00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 353 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3015 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.084 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.085 The recommended git tool is: git 00:00:00.085 using credential 00000000-0000-0000-0000-000000000002 00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.135 Fetching changes from the remote Git repository 00:00:00.137 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.173 Using shallow fetch with depth 1 00:00:00.173 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.173 > git --version # timeout=10 00:00:00.197 > git --version # 'git version 2.39.2' 00:00:00.198 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.198 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.198 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.198 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.209 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.221 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD) 00:00:06.221 > git config core.sparsecheckout # timeout=10 00:00:06.233 > git read-tree -mu HEAD # timeout=10 00:00:06.249 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5 00:00:06.274 Commit message: "ansible/roles/custom_facts: Drop nvme features" 00:00:06.274 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10 00:00:06.395 [Pipeline] Start of Pipeline 00:00:06.413 [Pipeline] library 00:00:06.415 Loading library shm_lib@master 00:00:06.415 Library shm_lib@master is cached. Copying from home. 00:00:06.435 [Pipeline] node 00:00:21.437 Still waiting to schedule task 00:00:21.437 Waiting for next available executor on ‘vagrant-vm-host’ 00:10:05.432 Running on VM-host-WFP7 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:10:05.434 [Pipeline] { 00:10:05.448 [Pipeline] catchError 00:10:05.450 [Pipeline] { 00:10:05.466 [Pipeline] wrap 00:10:05.479 [Pipeline] { 00:10:05.490 [Pipeline] stage 00:10:05.493 [Pipeline] { (Prologue) 00:10:05.518 [Pipeline] echo 00:10:05.520 Node: VM-host-WFP7 00:10:05.527 [Pipeline] cleanWs 00:10:05.538 [WS-CLEANUP] Deleting project workspace... 00:10:05.538 [WS-CLEANUP] Deferred wipeout is used... 
00:10:05.544 [WS-CLEANUP] done 00:10:05.769 [Pipeline] setCustomBuildProperty 00:10:05.837 [Pipeline] nodesByLabel 00:10:05.839 Found a total of 1 nodes with the 'sorcerer' label 00:10:05.848 [Pipeline] httpRequest 00:10:05.852 HttpMethod: GET 00:10:05.853 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:10:05.856 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:10:05.858 Response Code: HTTP/1.1 200 OK 00:10:05.858 Success: Status code 200 is in the accepted range: 200,404 00:10:05.859 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:10:05.997 [Pipeline] sh 00:10:06.279 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:10:06.298 [Pipeline] httpRequest 00:10:06.302 HttpMethod: GET 00:10:06.303 URL: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:10:06.303 Sending request to url: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:10:06.304 Response Code: HTTP/1.1 200 OK 00:10:06.304 Success: Status code 200 is in the accepted range: 200,404 00:10:06.305 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:10:08.620 [Pipeline] sh 00:10:08.902 + tar --no-same-owner -xf spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:10:12.221 [Pipeline] sh 00:10:12.528 + git -C spdk log --oneline -n5 00:10:12.528 8571999d8 test/scheduler: Stop moving all processes between cgroups 00:10:12.528 06472fb6d lib/idxd: fix batch size in kernel IDXD 00:10:12.528 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD 00:10:12.528 3dbaa93c1 nvmf: pass command dword 12 and 13 for write 00:10:12.528 19327fc3a bdev/nvme: use dtype/dspec for write commands 00:10:12.549 [Pipeline] withCredentials 00:10:12.561 > git --version # timeout=10 00:10:12.576 > git --version # 'git version 2.39.2' 00:10:12.593 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:10:12.595 [Pipeline] { 00:10:12.606 [Pipeline] retry 00:10:12.609 [Pipeline] { 00:10:12.626 [Pipeline] sh 00:10:12.904 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:10:13.172 [Pipeline] } 00:10:13.193 [Pipeline] // retry 00:10:13.200 [Pipeline] } 00:10:13.222 [Pipeline] // withCredentials 00:10:13.233 [Pipeline] httpRequest 00:10:13.237 HttpMethod: GET 00:10:13.237 URL: http://10.211.164.96/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:10:13.239 Sending request to url: http://10.211.164.96/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:10:13.241 Response Code: HTTP/1.1 200 OK 00:10:13.241 Success: Status code 200 is in the accepted range: 200,404 00:10:13.242 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:10:14.412 [Pipeline] sh 00:10:14.696 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:10:16.611 [Pipeline] sh 00:10:16.897 + git -C dpdk log --oneline -n5 00:10:16.897 eeb0605f11 version: 23.11.0 00:10:16.897 238778122a doc: update release notes for 23.11 00:10:16.897 46aa6b3cfc doc: fix description of RSS features 00:10:16.897 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:10:16.897 7e421ae345 devtools: support skipping forbid rule check 00:10:16.953 [Pipeline] writeFile 00:10:16.985 [Pipeline] sh 00:10:17.266 + 
jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:10:17.276 [Pipeline] sh 00:10:17.556 + cat autorun-spdk.conf 00:10:17.556 SPDK_RUN_FUNCTIONAL_TEST=1 00:10:17.556 SPDK_TEST_NVMF=1 00:10:17.556 SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:17.556 SPDK_TEST_USDT=1 00:10:17.556 SPDK_RUN_UBSAN=1 00:10:17.556 SPDK_TEST_NVMF_MDNS=1 00:10:17.556 NET_TYPE=virt 00:10:17.556 SPDK_JSONRPC_GO_CLIENT=1 00:10:17.556 SPDK_TEST_NATIVE_DPDK=v23.11 00:10:17.556 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:10:17.556 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:17.562 RUN_NIGHTLY=1 00:10:17.567 [Pipeline] } 00:10:17.584 [Pipeline] // stage 00:10:17.599 [Pipeline] stage 00:10:17.601 [Pipeline] { (Run VM) 00:10:17.617 [Pipeline] sh 00:10:17.899 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:10:17.899 + echo 'Start stage prepare_nvme.sh' 00:10:17.899 Start stage prepare_nvme.sh 00:10:17.899 + [[ -n 1 ]] 00:10:17.899 + disk_prefix=ex1 00:10:17.899 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:10:17.899 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:10:17.899 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:10:17.899 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:10:17.899 ++ SPDK_TEST_NVMF=1 00:10:17.899 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:10:17.899 ++ SPDK_TEST_USDT=1 00:10:17.899 ++ SPDK_RUN_UBSAN=1 00:10:17.899 ++ SPDK_TEST_NVMF_MDNS=1 00:10:17.899 ++ NET_TYPE=virt 00:10:17.899 ++ SPDK_JSONRPC_GO_CLIENT=1 00:10:17.899 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:10:17.899 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:10:17.899 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:10:17.899 ++ RUN_NIGHTLY=1 00:10:17.899 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:10:17.899 + nvme_files=() 00:10:17.899 + declare -A nvme_files 00:10:17.899 + backend_dir=/var/lib/libvirt/images/backends 00:10:17.899 + nvme_files['nvme.img']=5G 00:10:17.899 + nvme_files['nvme-cmb.img']=5G 00:10:17.899 + nvme_files['nvme-multi0.img']=4G 00:10:17.899 + nvme_files['nvme-multi1.img']=4G 00:10:17.899 + nvme_files['nvme-multi2.img']=4G 00:10:17.899 + nvme_files['nvme-openstack.img']=8G 00:10:17.899 + nvme_files['nvme-zns.img']=5G 00:10:17.899 + (( SPDK_TEST_NVME_PMR == 1 )) 00:10:17.899 + (( SPDK_TEST_FTL == 1 )) 00:10:17.899 + (( SPDK_TEST_NVME_FDP == 1 )) 00:10:17.899 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:10:17.899 + for nvme in "${!nvme_files[@]}" 00:10:17.899 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:10:17.899 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:10:17.899 + for nvme in "${!nvme_files[@]}" 00:10:17.899 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:10:17.899 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:10:17.899 + for nvme in "${!nvme_files[@]}" 00:10:17.899 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:10:17.899 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:10:17.899 + for nvme in "${!nvme_files[@]}" 00:10:17.899 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:10:18.157 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:10:18.157 + for nvme in "${!nvme_files[@]}" 00:10:18.157 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:10:18.157 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:10:18.157 + for nvme in "${!nvme_files[@]}" 00:10:18.157 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:10:18.157 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:10:18.157 + for nvme in "${!nvme_files[@]}" 00:10:18.157 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:10:19.107 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:10:19.107 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:10:19.107 + echo 'End stage prepare_nvme.sh' 00:10:19.107 End stage prepare_nvme.sh 00:10:19.120 [Pipeline] sh 00:10:19.402 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:10:19.402 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora38 00:10:19.402 00:10:19.402 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:10:19.402 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:10:19.402 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:10:19.402 HELP=0 00:10:19.402 DRY_RUN=0 00:10:19.402 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:10:19.402 NVME_DISKS_TYPE=nvme,nvme, 00:10:19.402 NVME_AUTO_CREATE=0 00:10:19.402 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:10:19.402 NVME_CMB=,, 00:10:19.402 NVME_PMR=,, 00:10:19.402 NVME_ZNS=,, 00:10:19.402 NVME_MS=,, 00:10:19.402 NVME_FDP=,, 00:10:19.402 SPDK_VAGRANT_DISTRO=fedora38 
00:10:19.402 SPDK_VAGRANT_VMCPU=10 00:10:19.402 SPDK_VAGRANT_VMRAM=12288 00:10:19.402 SPDK_VAGRANT_PROVIDER=libvirt 00:10:19.402 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:10:19.402 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:10:19.402 SPDK_OPENSTACK_NETWORK=0 00:10:19.402 VAGRANT_PACKAGE_BOX=0 00:10:19.402 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:10:19.402 FORCE_DISTRO=true 00:10:19.402 VAGRANT_BOX_VERSION= 00:10:19.402 EXTRA_VAGRANTFILES= 00:10:19.402 NIC_MODEL=virtio 00:10:19.402 00:10:19.402 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:10:19.402 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:10:21.930 Bringing machine 'default' up with 'libvirt' provider... 00:10:22.864 ==> default: Creating image (snapshot of base box volume). 00:10:22.864 ==> default: Creating domain with the following settings... 00:10:22.865 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1714165991_014cb22b6a5440b58d61 00:10:22.865 ==> default: -- Domain type: kvm 00:10:22.865 ==> default: -- Cpus: 10 00:10:22.865 ==> default: -- Feature: acpi 00:10:22.865 ==> default: -- Feature: apic 00:10:22.865 ==> default: -- Feature: pae 00:10:22.865 ==> default: -- Memory: 12288M 00:10:22.865 ==> default: -- Memory Backing: hugepages: 00:10:22.865 ==> default: -- Management MAC: 00:10:22.865 ==> default: -- Loader: 00:10:22.865 ==> default: -- Nvram: 00:10:22.865 ==> default: -- Base box: spdk/fedora38 00:10:22.865 ==> default: -- Storage pool: default 00:10:22.865 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1714165991_014cb22b6a5440b58d61.img (20G) 00:10:22.865 ==> default: -- Volume Cache: default 00:10:22.865 ==> default: -- Kernel: 00:10:22.865 ==> default: -- Initrd: 00:10:22.865 ==> default: -- Graphics Type: vnc 00:10:22.865 ==> default: -- Graphics Port: -1 00:10:22.865 ==> default: -- Graphics IP: 127.0.0.1 00:10:22.865 ==> default: -- Graphics Password: Not defined 00:10:22.865 ==> default: -- Video Type: cirrus 00:10:22.865 ==> default: -- Video VRAM: 9216 00:10:22.865 ==> default: -- Sound Type: 00:10:22.865 ==> default: -- Keymap: en-us 00:10:22.865 ==> default: -- TPM Path: 00:10:22.865 ==> default: -- INPUT: type=mouse, bus=ps2 00:10:22.865 ==> default: -- Command line args: 00:10:22.865 ==> default: -> value=-device, 00:10:22.865 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:10:22.865 ==> default: -> value=-drive, 00:10:22.865 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:10:22.865 ==> default: -> value=-device, 00:10:22.865 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:22.865 ==> default: -> value=-device, 00:10:22.865 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:10:22.865 ==> default: -> value=-drive, 00:10:22.865 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:10:22.865 ==> default: -> value=-device, 00:10:22.865 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:22.865 ==> default: -> value=-drive, 00:10:22.865 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:10:22.865 ==> default: -> value=-device, 00:10:22.865 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:22.865 ==> default: -> value=-drive, 00:10:22.865 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:10:22.865 ==> default: -> value=-device, 00:10:22.865 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:10:23.124 ==> default: Creating shared folders metadata... 00:10:23.124 ==> default: Starting domain. 00:10:24.501 ==> default: Waiting for domain to get an IP address... 00:10:42.586 ==> default: Waiting for SSH to become available... 00:10:43.959 ==> default: Configuring and enabling network interfaces... 00:10:50.523 default: SSH address: 192.168.121.93:22 00:10:50.523 default: SSH username: vagrant 00:10:50.523 default: SSH auth method: private key 00:10:51.903 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:10:58.475 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:11:05.047 ==> default: Mounting SSHFS shared folder... 00:11:06.951 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:11:06.951 ==> default: Checking Mount.. 00:11:08.480 ==> default: Folder Successfully Mounted! 00:11:08.480 ==> default: Running provisioner: file... 00:11:09.415 default: ~/.gitconfig => .gitconfig 00:11:09.674 00:11:09.674 SUCCESS! 00:11:09.674 00:11:09.674 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:11:09.674 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:11:09.674 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:11:09.674 00:11:09.683 [Pipeline] } 00:11:09.701 [Pipeline] // stage 00:11:09.710 [Pipeline] dir 00:11:09.711 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:11:09.713 [Pipeline] { 00:11:09.731 [Pipeline] catchError 00:11:09.733 [Pipeline] { 00:11:09.750 [Pipeline] sh 00:11:10.028 + vagrant ssh-config --host vagrant+ 00:11:10.028 sed -ne+ /^Host/,$p 00:11:10.028 tee ssh_conf 00:11:13.312 Host vagrant 00:11:13.312 HostName 192.168.121.93 00:11:13.312 User vagrant 00:11:13.312 Port 22 00:11:13.312 UserKnownHostsFile /dev/null 00:11:13.312 StrictHostKeyChecking no 00:11:13.312 PasswordAuthentication no 00:11:13.312 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:11:13.312 IdentitiesOnly yes 00:11:13.312 LogLevel FATAL 00:11:13.312 ForwardAgent yes 00:11:13.312 ForwardX11 yes 00:11:13.312 00:11:13.326 [Pipeline] withEnv 00:11:13.328 [Pipeline] { 00:11:13.346 [Pipeline] sh 00:11:13.630 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:11:13.630 source /etc/os-release 00:11:13.630 [[ -e /image.version ]] && img=$(< /image.version) 00:11:13.630 # Minimal, systemd-like check. 
00:11:13.630 if [[ -e /.dockerenv ]]; then 00:11:13.630 # Clear garbage from the node's name: 00:11:13.630 # agt-er_autotest_547-896 -> autotest_547-896 00:11:13.630 # $HOSTNAME is the actual container id 00:11:13.630 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:11:13.630 if mountpoint -q /etc/hostname; then 00:11:13.630 # We can assume this is a mount from a host where container is running, 00:11:13.630 # so fetch its hostname to easily identify the target swarm worker. 00:11:13.630 container="$(< /etc/hostname) ($agent)" 00:11:13.630 else 00:11:13.630 # Fallback 00:11:13.630 container=$agent 00:11:13.630 fi 00:11:13.630 fi 00:11:13.630 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:11:13.630 00:11:13.898 [Pipeline] } 00:11:13.912 [Pipeline] // withEnv 00:11:13.918 [Pipeline] setCustomBuildProperty 00:11:13.926 [Pipeline] stage 00:11:13.928 [Pipeline] { (Tests) 00:11:13.939 [Pipeline] sh 00:11:14.216 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:11:14.485 [Pipeline] timeout 00:11:14.486 Timeout set to expire in 40 min 00:11:14.487 [Pipeline] { 00:11:14.504 [Pipeline] sh 00:11:14.783 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:11:15.350 HEAD is now at 8571999d8 test/scheduler: Stop moving all processes between cgroups 00:11:15.361 [Pipeline] sh 00:11:15.638 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:11:15.909 [Pipeline] sh 00:11:16.189 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:11:16.461 [Pipeline] sh 00:11:16.740 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:11:16.997 ++ readlink -f spdk_repo 00:11:16.997 + DIR_ROOT=/home/vagrant/spdk_repo 00:11:16.997 + [[ -n /home/vagrant/spdk_repo ]] 00:11:16.997 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:11:16.997 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:11:16.997 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:11:16.997 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:11:16.997 + [[ -d /home/vagrant/spdk_repo/output ]] 00:11:16.997 + cd /home/vagrant/spdk_repo 00:11:16.997 + source /etc/os-release 00:11:16.997 ++ NAME='Fedora Linux' 00:11:16.997 ++ VERSION='38 (Cloud Edition)' 00:11:16.997 ++ ID=fedora 00:11:16.997 ++ VERSION_ID=38 00:11:16.997 ++ VERSION_CODENAME= 00:11:16.997 ++ PLATFORM_ID=platform:f38 00:11:16.997 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:11:16.997 ++ ANSI_COLOR='0;38;2;60;110;180' 00:11:16.997 ++ LOGO=fedora-logo-icon 00:11:16.997 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:11:16.997 ++ HOME_URL=https://fedoraproject.org/ 00:11:16.997 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:11:16.997 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:11:16.997 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:11:16.997 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:11:16.997 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:11:16.997 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:11:16.997 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:11:16.997 ++ SUPPORT_END=2024-05-14 00:11:16.997 ++ VARIANT='Cloud Edition' 00:11:16.997 ++ VARIANT_ID=cloud 00:11:16.997 + uname -a 00:11:16.997 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:11:16.997 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:17.254 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.254 Hugepages 00:11:17.254 node hugesize free / total 00:11:17.254 node0 1048576kB 0 / 0 00:11:17.254 node0 2048kB 0 / 0 00:11:17.254 00:11:17.254 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:17.254 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:17.254 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:11:17.254 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:11:17.254 + rm -f /tmp/spdk-ld-path 00:11:17.254 + source autorun-spdk.conf 00:11:17.254 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:11:17.254 ++ SPDK_TEST_NVMF=1 00:11:17.254 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:11:17.254 ++ SPDK_TEST_USDT=1 00:11:17.254 ++ SPDK_RUN_UBSAN=1 00:11:17.254 ++ SPDK_TEST_NVMF_MDNS=1 00:11:17.254 ++ NET_TYPE=virt 00:11:17.254 ++ SPDK_JSONRPC_GO_CLIENT=1 00:11:17.254 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:11:17.254 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:11:17.254 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:11:17.254 ++ RUN_NIGHTLY=1 00:11:17.254 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:11:17.254 + [[ -n '' ]] 00:11:17.254 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:11:17.511 + for M in /var/spdk/build-*-manifest.txt 00:11:17.511 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:11:17.511 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:11:17.511 + for M in /var/spdk/build-*-manifest.txt 00:11:17.511 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:11:17.511 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:11:17.511 ++ uname 00:11:17.511 + [[ Linux == \L\i\n\u\x ]] 00:11:17.511 + sudo dmesg -T 00:11:17.511 + sudo dmesg --clear 00:11:17.511 + dmesg_pid=6051 00:11:17.511 + [[ Fedora Linux == FreeBSD ]] 00:11:17.511 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:17.511 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:17.511 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:11:17.511 + [[ -x /usr/src/fio-static/fio 
]] 00:11:17.511 + sudo dmesg -Tw 00:11:17.512 + export FIO_BIN=/usr/src/fio-static/fio 00:11:17.512 + FIO_BIN=/usr/src/fio-static/fio 00:11:17.512 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:11:17.512 + [[ ! -v VFIO_QEMU_BIN ]] 00:11:17.512 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:11:17.512 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:17.512 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:17.512 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:11:17.512 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:17.512 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:17.512 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:11:17.512 Test configuration: 00:11:17.512 SPDK_RUN_FUNCTIONAL_TEST=1 00:11:17.512 SPDK_TEST_NVMF=1 00:11:17.512 SPDK_TEST_NVMF_TRANSPORT=tcp 00:11:17.512 SPDK_TEST_USDT=1 00:11:17.512 SPDK_RUN_UBSAN=1 00:11:17.512 SPDK_TEST_NVMF_MDNS=1 00:11:17.512 NET_TYPE=virt 00:11:17.512 SPDK_JSONRPC_GO_CLIENT=1 00:11:17.512 SPDK_TEST_NATIVE_DPDK=v23.11 00:11:17.512 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:11:17.512 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:11:17.512 RUN_NIGHTLY=1 21:14:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:17.512 21:14:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:11:17.512 21:14:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.512 21:14:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.512 21:14:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.512 21:14:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.512 21:14:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.512 21:14:06 -- paths/export.sh@5 -- $ export PATH 00:11:17.512 21:14:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.512 21:14:06 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:11:17.512 21:14:06 -- common/autobuild_common.sh@435 -- $ date +%s 
00:11:17.512 21:14:06 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714166046.XXXXXX 00:11:17.512 21:14:06 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714166046.KRDltn 00:11:17.512 21:14:06 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:11:17.512 21:14:06 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:11:17.512 21:14:06 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:11:17.512 21:14:06 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:11:17.512 21:14:06 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:11:17.512 21:14:06 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:11:17.512 21:14:06 -- common/autobuild_common.sh@451 -- $ get_config_params 00:11:17.512 21:14:06 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:11:17.512 21:14:06 -- common/autotest_common.sh@10 -- $ set +x 00:11:17.512 21:14:06 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:11:17.512 21:14:06 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:11:17.512 21:14:06 -- pm/common@17 -- $ local monitor 00:11:17.512 21:14:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:17.512 21:14:06 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=6087 00:11:17.512 21:14:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:17.512 21:14:06 -- pm/common@21 -- $ date +%s 00:11:17.512 21:14:06 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=6089 00:11:17.512 21:14:06 -- pm/common@26 -- $ sleep 1 00:11:17.512 21:14:06 -- pm/common@21 -- $ date +%s 00:11:17.829 21:14:06 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714166046 00:11:17.829 21:14:06 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714166046 00:11:17.829 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714166046_collect-vmstat.pm.log 00:11:17.829 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714166046_collect-cpu-load.pm.log 00:11:18.765 21:14:07 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:11:18.765 21:14:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:11:18.765 21:14:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:11:18.765 21:14:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:11:18.765 21:14:07 -- spdk/autobuild.sh@16 -- $ date -u 00:11:18.765 Fri Apr 26 09:14:07 PM UTC 2024 00:11:18.765 21:14:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:11:18.765 v24.05-pre-449-g8571999d8 00:11:18.765 21:14:07 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:11:18.765 21:14:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:11:18.765 21:14:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 
00:11:18.765 21:14:07 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:11:18.765 21:14:07 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:11:18.765 21:14:07 -- common/autotest_common.sh@10 -- $ set +x 00:11:18.765 ************************************ 00:11:18.765 START TEST ubsan 00:11:18.765 ************************************ 00:11:18.765 using ubsan 00:11:18.765 21:14:07 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:11:18.765 00:11:18.765 real 0m0.001s 00:11:18.765 user 0m0.001s 00:11:18.765 sys 0m0.000s 00:11:18.765 21:14:07 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:11:18.765 21:14:07 -- common/autotest_common.sh@10 -- $ set +x 00:11:18.765 ************************************ 00:11:18.765 END TEST ubsan 00:11:18.765 ************************************ 00:11:18.765 21:14:07 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:11:18.765 21:14:07 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:11:18.765 21:14:07 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:11:18.765 21:14:07 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:11:18.765 21:14:07 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:11:18.765 21:14:07 -- common/autotest_common.sh@10 -- $ set +x 00:11:19.025 ************************************ 00:11:19.025 START TEST build_native_dpdk 00:11:19.025 ************************************ 00:11:19.025 21:14:08 -- common/autotest_common.sh@1111 -- $ _build_native_dpdk 00:11:19.025 21:14:08 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:11:19.025 21:14:08 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:11:19.025 21:14:08 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:11:19.025 21:14:08 -- common/autobuild_common.sh@51 -- $ local compiler 00:11:19.025 21:14:08 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:11:19.025 21:14:08 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:11:19.025 21:14:08 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:11:19.025 21:14:08 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:11:19.025 21:14:08 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:11:19.025 21:14:08 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:11:19.025 21:14:08 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:11:19.025 21:14:08 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:11:19.025 21:14:08 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:11:19.025 21:14:08 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:11:19.025 21:14:08 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:11:19.025 21:14:08 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:11:19.025 21:14:08 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:11:19.025 21:14:08 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:11:19.025 21:14:08 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:11:19.025 21:14:08 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:11:19.025 eeb0605f11 version: 23.11.0 00:11:19.025 238778122a doc: update release notes for 23.11 00:11:19.025 46aa6b3cfc doc: fix description of RSS features 00:11:19.025 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:11:19.025 7e421ae345 devtools: support skipping forbid rule check 00:11:19.025 21:14:08 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:11:19.025 21:14:08 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:11:19.025 21:14:08 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:11:19.025 21:14:08 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:11:19.025 21:14:08 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:11:19.025 21:14:08 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:11:19.025 21:14:08 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:11:19.025 21:14:08 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:11:19.025 21:14:08 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:11:19.025 21:14:08 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:11:19.025 21:14:08 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:11:19.025 21:14:08 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:11:19.025 21:14:08 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:11:19.025 21:14:08 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:11:19.025 21:14:08 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:11:19.025 21:14:08 -- common/autobuild_common.sh@168 -- $ uname -s 00:11:19.025 21:14:08 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:11:19.025 21:14:08 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:11:19.025 21:14:08 -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:11:19.025 21:14:08 -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:11:19.025 21:14:08 -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:11:19.025 21:14:08 -- scripts/common.sh@333 -- $ IFS=.-: 00:11:19.025 21:14:08 -- scripts/common.sh@333 -- $ read -ra ver1 00:11:19.025 21:14:08 -- scripts/common.sh@334 -- $ IFS=.-: 00:11:19.025 21:14:08 -- scripts/common.sh@334 -- $ read -ra ver2 00:11:19.025 21:14:08 -- scripts/common.sh@335 -- $ local 'op=<' 00:11:19.025 21:14:08 -- scripts/common.sh@337 -- $ ver1_l=3 00:11:19.025 21:14:08 -- scripts/common.sh@338 -- $ ver2_l=3 00:11:19.025 21:14:08 -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:11:19.025 21:14:08 -- scripts/common.sh@341 -- $ case "$op" in 00:11:19.025 21:14:08 -- scripts/common.sh@342 -- $ : 1 00:11:19.025 21:14:08 -- scripts/common.sh@361 -- $ (( v = 0 )) 00:11:19.025 21:14:08 -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.025 21:14:08 -- scripts/common.sh@362 -- $ decimal 23 00:11:19.025 21:14:08 -- scripts/common.sh@350 -- $ local d=23 00:11:19.025 21:14:08 -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:11:19.025 21:14:08 -- scripts/common.sh@352 -- $ echo 23 00:11:19.025 21:14:08 -- scripts/common.sh@362 -- $ ver1[v]=23 00:11:19.025 21:14:08 -- scripts/common.sh@363 -- $ decimal 21 00:11:19.025 21:14:08 -- scripts/common.sh@350 -- $ local d=21 00:11:19.026 21:14:08 -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:11:19.026 21:14:08 -- scripts/common.sh@352 -- $ echo 21 00:11:19.026 21:14:08 -- scripts/common.sh@363 -- $ ver2[v]=21 00:11:19.026 21:14:08 -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:11:19.026 21:14:08 -- scripts/common.sh@364 -- $ return 1 00:11:19.026 21:14:08 -- common/autobuild_common.sh@173 -- $ patch -p1 00:11:19.026 patching file config/rte_config.h 00:11:19.026 Hunk #1 succeeded at 60 (offset 1 line). 00:11:19.026 21:14:08 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:11:19.026 21:14:08 -- common/autobuild_common.sh@178 -- $ uname -s 00:11:19.026 21:14:08 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:11:19.026 21:14:08 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:11:19.026 21:14:08 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:11:24.301 The Meson build system 00:11:24.301 Version: 1.3.1 00:11:24.301 Source dir: /home/vagrant/spdk_repo/dpdk 00:11:24.301 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:11:24.301 Build type: native build 00:11:24.301 Program cat found: YES (/usr/bin/cat) 00:11:24.301 Project name: DPDK 00:11:24.301 Project version: 23.11.0 00:11:24.301 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:11:24.301 C linker for the host machine: gcc ld.bfd 2.39-16 00:11:24.301 Host machine cpu family: x86_64 00:11:24.301 Host machine cpu: x86_64 00:11:24.301 Message: ## Building in Developer Mode ## 00:11:24.301 Program pkg-config found: YES (/usr/bin/pkg-config) 00:11:24.301 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:11:24.301 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:11:24.301 Program python3 found: YES (/usr/bin/python3) 00:11:24.301 Program cat found: YES (/usr/bin/cat) 00:11:24.301 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:11:24.301 Compiler for C supports arguments -march=native: YES 00:11:24.301 Checking for size of "void *" : 8 00:11:24.301 Checking for size of "void *" : 8 (cached) 00:11:24.301 Library m found: YES 00:11:24.301 Library numa found: YES 00:11:24.301 Has header "numaif.h" : YES 00:11:24.301 Library fdt found: NO 00:11:24.301 Library execinfo found: NO 00:11:24.301 Has header "execinfo.h" : YES 00:11:24.301 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:11:24.301 Run-time dependency libarchive found: NO (tried pkgconfig) 00:11:24.301 Run-time dependency libbsd found: NO (tried pkgconfig) 00:11:24.301 Run-time dependency jansson found: NO (tried pkgconfig) 00:11:24.301 Run-time dependency openssl found: YES 3.0.9 00:11:24.301 Run-time dependency libpcap found: YES 1.10.4 00:11:24.301 Has header "pcap.h" with dependency libpcap: YES 00:11:24.301 Compiler for C supports arguments -Wcast-qual: YES 00:11:24.301 Compiler for C supports arguments -Wdeprecated: YES 00:11:24.301 Compiler for C supports arguments -Wformat: YES 00:11:24.301 Compiler for C supports arguments -Wformat-nonliteral: NO 00:11:24.301 Compiler for C supports arguments -Wformat-security: NO 00:11:24.301 Compiler for C supports arguments -Wmissing-declarations: YES 00:11:24.301 Compiler for C supports arguments -Wmissing-prototypes: YES 00:11:24.301 Compiler for C supports arguments -Wnested-externs: YES 00:11:24.301 Compiler for C supports arguments -Wold-style-definition: YES 00:11:24.301 Compiler for C supports arguments -Wpointer-arith: YES 00:11:24.301 Compiler for C supports arguments -Wsign-compare: YES 00:11:24.301 Compiler for C supports arguments -Wstrict-prototypes: YES 00:11:24.301 Compiler for C supports arguments -Wundef: YES 00:11:24.301 Compiler for C supports arguments -Wwrite-strings: YES 00:11:24.301 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:11:24.301 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:11:24.301 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:11:24.301 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:11:24.301 Program objdump found: YES (/usr/bin/objdump) 00:11:24.301 Compiler for C supports arguments -mavx512f: YES 00:11:24.301 Checking if "AVX512 checking" compiles: YES 00:11:24.301 Fetching value of define "__SSE4_2__" : 1 00:11:24.301 Fetching value of define "__AES__" : 1 00:11:24.301 Fetching value of define "__AVX__" : 1 00:11:24.301 Fetching value of define "__AVX2__" : 1 00:11:24.301 Fetching value of define "__AVX512BW__" : 1 00:11:24.301 Fetching value of define "__AVX512CD__" : 1 00:11:24.301 Fetching value of define "__AVX512DQ__" : 1 00:11:24.301 Fetching value of define "__AVX512F__" : 1 00:11:24.301 Fetching value of define "__AVX512VL__" : 1 00:11:24.301 Fetching value of define "__PCLMUL__" : 1 00:11:24.301 Fetching value of define "__RDRND__" : 1 00:11:24.301 Fetching value of define "__RDSEED__" : 1 00:11:24.301 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:11:24.301 Fetching value of define "__znver1__" : (undefined) 00:11:24.301 Fetching value of define "__znver2__" : (undefined) 00:11:24.302 Fetching value of define "__znver3__" : (undefined) 00:11:24.302 Fetching value of define "__znver4__" : (undefined) 00:11:24.302 Compiler for C supports arguments -Wno-format-truncation: YES 00:11:24.302 Message: lib/log: Defining dependency "log" 00:11:24.302 Message: lib/kvargs: Defining dependency "kvargs" 00:11:24.302 Message: lib/telemetry: Defining dependency 
"telemetry" 00:11:24.302 Checking for function "getentropy" : NO 00:11:24.302 Message: lib/eal: Defining dependency "eal" 00:11:24.302 Message: lib/ring: Defining dependency "ring" 00:11:24.302 Message: lib/rcu: Defining dependency "rcu" 00:11:24.302 Message: lib/mempool: Defining dependency "mempool" 00:11:24.302 Message: lib/mbuf: Defining dependency "mbuf" 00:11:24.302 Fetching value of define "__PCLMUL__" : 1 (cached) 00:11:24.302 Fetching value of define "__AVX512F__" : 1 (cached) 00:11:24.302 Fetching value of define "__AVX512BW__" : 1 (cached) 00:11:24.302 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:11:24.302 Fetching value of define "__AVX512VL__" : 1 (cached) 00:11:24.302 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:11:24.302 Compiler for C supports arguments -mpclmul: YES 00:11:24.302 Compiler for C supports arguments -maes: YES 00:11:24.302 Compiler for C supports arguments -mavx512f: YES (cached) 00:11:24.302 Compiler for C supports arguments -mavx512bw: YES 00:11:24.302 Compiler for C supports arguments -mavx512dq: YES 00:11:24.302 Compiler for C supports arguments -mavx512vl: YES 00:11:24.302 Compiler for C supports arguments -mvpclmulqdq: YES 00:11:24.302 Compiler for C supports arguments -mavx2: YES 00:11:24.302 Compiler for C supports arguments -mavx: YES 00:11:24.302 Message: lib/net: Defining dependency "net" 00:11:24.302 Message: lib/meter: Defining dependency "meter" 00:11:24.302 Message: lib/ethdev: Defining dependency "ethdev" 00:11:24.302 Message: lib/pci: Defining dependency "pci" 00:11:24.302 Message: lib/cmdline: Defining dependency "cmdline" 00:11:24.302 Message: lib/metrics: Defining dependency "metrics" 00:11:24.302 Message: lib/hash: Defining dependency "hash" 00:11:24.302 Message: lib/timer: Defining dependency "timer" 00:11:24.302 Fetching value of define "__AVX512F__" : 1 (cached) 00:11:24.302 Fetching value of define "__AVX512VL__" : 1 (cached) 00:11:24.302 Fetching value of define "__AVX512CD__" : 1 (cached) 00:11:24.302 Fetching value of define "__AVX512BW__" : 1 (cached) 00:11:24.302 Message: lib/acl: Defining dependency "acl" 00:11:24.302 Message: lib/bbdev: Defining dependency "bbdev" 00:11:24.302 Message: lib/bitratestats: Defining dependency "bitratestats" 00:11:24.302 Run-time dependency libelf found: YES 0.190 00:11:24.302 Message: lib/bpf: Defining dependency "bpf" 00:11:24.302 Message: lib/cfgfile: Defining dependency "cfgfile" 00:11:24.302 Message: lib/compressdev: Defining dependency "compressdev" 00:11:24.302 Message: lib/cryptodev: Defining dependency "cryptodev" 00:11:24.302 Message: lib/distributor: Defining dependency "distributor" 00:11:24.302 Message: lib/dmadev: Defining dependency "dmadev" 00:11:24.302 Message: lib/efd: Defining dependency "efd" 00:11:24.302 Message: lib/eventdev: Defining dependency "eventdev" 00:11:24.302 Message: lib/dispatcher: Defining dependency "dispatcher" 00:11:24.302 Message: lib/gpudev: Defining dependency "gpudev" 00:11:24.302 Message: lib/gro: Defining dependency "gro" 00:11:24.302 Message: lib/gso: Defining dependency "gso" 00:11:24.302 Message: lib/ip_frag: Defining dependency "ip_frag" 00:11:24.302 Message: lib/jobstats: Defining dependency "jobstats" 00:11:24.302 Message: lib/latencystats: Defining dependency "latencystats" 00:11:24.302 Message: lib/lpm: Defining dependency "lpm" 00:11:24.302 Fetching value of define "__AVX512F__" : 1 (cached) 00:11:24.302 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:11:24.302 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:11:24.302 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:11:24.302 Message: lib/member: Defining dependency "member" 00:11:24.302 Message: lib/pcapng: Defining dependency "pcapng" 00:11:24.302 Compiler for C supports arguments -Wno-cast-qual: YES 00:11:24.302 Message: lib/power: Defining dependency "power" 00:11:24.302 Message: lib/rawdev: Defining dependency "rawdev" 00:11:24.302 Message: lib/regexdev: Defining dependency "regexdev" 00:11:24.302 Message: lib/mldev: Defining dependency "mldev" 00:11:24.302 Message: lib/rib: Defining dependency "rib" 00:11:24.302 Message: lib/reorder: Defining dependency "reorder" 00:11:24.302 Message: lib/sched: Defining dependency "sched" 00:11:24.302 Message: lib/security: Defining dependency "security" 00:11:24.302 Message: lib/stack: Defining dependency "stack" 00:11:24.302 Has header "linux/userfaultfd.h" : YES 00:11:24.302 Has header "linux/vduse.h" : YES 00:11:24.302 Message: lib/vhost: Defining dependency "vhost" 00:11:24.302 Message: lib/ipsec: Defining dependency "ipsec" 00:11:24.302 Message: lib/pdcp: Defining dependency "pdcp" 00:11:24.302 Fetching value of define "__AVX512F__" : 1 (cached) 00:11:24.302 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:11:24.302 Fetching value of define "__AVX512BW__" : 1 (cached) 00:11:24.302 Message: lib/fib: Defining dependency "fib" 00:11:24.302 Message: lib/port: Defining dependency "port" 00:11:24.302 Message: lib/pdump: Defining dependency "pdump" 00:11:24.302 Message: lib/table: Defining dependency "table" 00:11:24.302 Message: lib/pipeline: Defining dependency "pipeline" 00:11:24.302 Message: lib/graph: Defining dependency "graph" 00:11:24.302 Message: lib/node: Defining dependency "node" 00:11:24.302 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:11:24.302 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:11:24.302 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:11:25.312 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:11:25.312 Compiler for C supports arguments -Wno-sign-compare: YES 00:11:25.312 Compiler for C supports arguments -Wno-unused-value: YES 00:11:25.312 Compiler for C supports arguments -Wno-format: YES 00:11:25.312 Compiler for C supports arguments -Wno-format-security: YES 00:11:25.312 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:11:25.313 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:11:25.313 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:11:25.313 Compiler for C supports arguments -Wno-unused-parameter: YES 00:11:25.313 Fetching value of define "__AVX512F__" : 1 (cached) 00:11:25.313 Fetching value of define "__AVX512BW__" : 1 (cached) 00:11:25.313 Compiler for C supports arguments -mavx512f: YES (cached) 00:11:25.313 Compiler for C supports arguments -mavx512bw: YES (cached) 00:11:25.313 Compiler for C supports arguments -march=skylake-avx512: YES 00:11:25.313 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:11:25.313 Has header "sys/epoll.h" : YES 00:11:25.313 Program doxygen found: YES (/usr/bin/doxygen) 00:11:25.313 Configuring doxy-api-html.conf using configuration 00:11:25.313 Configuring doxy-api-man.conf using configuration 00:11:25.313 Program mandb found: YES (/usr/bin/mandb) 00:11:25.313 Program sphinx-build found: NO 00:11:25.313 Configuring rte_build_config.h using configuration 00:11:25.313 Message: 00:11:25.313 ================= 00:11:25.313 Applications Enabled 00:11:25.313 
================= 00:11:25.313 00:11:25.313 apps: 00:11:25.313 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:11:25.313 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:11:25.313 test-pmd, test-regex, test-sad, test-security-perf, 00:11:25.313 00:11:25.313 Message: 00:11:25.313 ================= 00:11:25.313 Libraries Enabled 00:11:25.313 ================= 00:11:25.313 00:11:25.313 libs: 00:11:25.313 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:11:25.313 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:11:25.313 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:11:25.313 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:11:25.313 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:11:25.313 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:11:25.313 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:11:25.313 00:11:25.313 00:11:25.313 Message: 00:11:25.313 =============== 00:11:25.313 Drivers Enabled 00:11:25.313 =============== 00:11:25.313 00:11:25.313 common: 00:11:25.313 00:11:25.313 bus: 00:11:25.313 pci, vdev, 00:11:25.313 mempool: 00:11:25.313 ring, 00:11:25.313 dma: 00:11:25.313 00:11:25.313 net: 00:11:25.313 i40e, 00:11:25.313 raw: 00:11:25.313 00:11:25.313 crypto: 00:11:25.313 00:11:25.313 compress: 00:11:25.313 00:11:25.313 regex: 00:11:25.313 00:11:25.313 ml: 00:11:25.313 00:11:25.313 vdpa: 00:11:25.313 00:11:25.313 event: 00:11:25.313 00:11:25.313 baseband: 00:11:25.313 00:11:25.313 gpu: 00:11:25.313 00:11:25.313 00:11:25.313 Message: 00:11:25.313 ================= 00:11:25.313 Content Skipped 00:11:25.313 ================= 00:11:25.313 00:11:25.313 apps: 00:11:25.313 00:11:25.313 libs: 00:11:25.313 00:11:25.313 drivers: 00:11:25.313 common/cpt: not in enabled drivers build config 00:11:25.313 common/dpaax: not in enabled drivers build config 00:11:25.313 common/iavf: not in enabled drivers build config 00:11:25.313 common/idpf: not in enabled drivers build config 00:11:25.313 common/mvep: not in enabled drivers build config 00:11:25.313 common/octeontx: not in enabled drivers build config 00:11:25.313 bus/auxiliary: not in enabled drivers build config 00:11:25.313 bus/cdx: not in enabled drivers build config 00:11:25.313 bus/dpaa: not in enabled drivers build config 00:11:25.313 bus/fslmc: not in enabled drivers build config 00:11:25.313 bus/ifpga: not in enabled drivers build config 00:11:25.313 bus/platform: not in enabled drivers build config 00:11:25.313 bus/vmbus: not in enabled drivers build config 00:11:25.313 common/cnxk: not in enabled drivers build config 00:11:25.313 common/mlx5: not in enabled drivers build config 00:11:25.313 common/nfp: not in enabled drivers build config 00:11:25.313 common/qat: not in enabled drivers build config 00:11:25.313 common/sfc_efx: not in enabled drivers build config 00:11:25.313 mempool/bucket: not in enabled drivers build config 00:11:25.313 mempool/cnxk: not in enabled drivers build config 00:11:25.313 mempool/dpaa: not in enabled drivers build config 00:11:25.313 mempool/dpaa2: not in enabled drivers build config 00:11:25.313 mempool/octeontx: not in enabled drivers build config 00:11:25.313 mempool/stack: not in enabled drivers build config 00:11:25.313 dma/cnxk: not in enabled drivers build config 00:11:25.313 dma/dpaa: not in enabled drivers build config 00:11:25.313 dma/dpaa2: not in enabled drivers build 
config 00:11:25.313 dma/hisilicon: not in enabled drivers build config 00:11:25.313 dma/idxd: not in enabled drivers build config 00:11:25.313 dma/ioat: not in enabled drivers build config 00:11:25.313 dma/skeleton: not in enabled drivers build config 00:11:25.313 net/af_packet: not in enabled drivers build config 00:11:25.313 net/af_xdp: not in enabled drivers build config 00:11:25.313 net/ark: not in enabled drivers build config 00:11:25.313 net/atlantic: not in enabled drivers build config 00:11:25.313 net/avp: not in enabled drivers build config 00:11:25.313 net/axgbe: not in enabled drivers build config 00:11:25.313 net/bnx2x: not in enabled drivers build config 00:11:25.313 net/bnxt: not in enabled drivers build config 00:11:25.313 net/bonding: not in enabled drivers build config 00:11:25.313 net/cnxk: not in enabled drivers build config 00:11:25.313 net/cpfl: not in enabled drivers build config 00:11:25.313 net/cxgbe: not in enabled drivers build config 00:11:25.313 net/dpaa: not in enabled drivers build config 00:11:25.313 net/dpaa2: not in enabled drivers build config 00:11:25.313 net/e1000: not in enabled drivers build config 00:11:25.313 net/ena: not in enabled drivers build config 00:11:25.313 net/enetc: not in enabled drivers build config 00:11:25.313 net/enetfec: not in enabled drivers build config 00:11:25.313 net/enic: not in enabled drivers build config 00:11:25.313 net/failsafe: not in enabled drivers build config 00:11:25.313 net/fm10k: not in enabled drivers build config 00:11:25.313 net/gve: not in enabled drivers build config 00:11:25.313 net/hinic: not in enabled drivers build config 00:11:25.313 net/hns3: not in enabled drivers build config 00:11:25.313 net/iavf: not in enabled drivers build config 00:11:25.313 net/ice: not in enabled drivers build config 00:11:25.313 net/idpf: not in enabled drivers build config 00:11:25.313 net/igc: not in enabled drivers build config 00:11:25.313 net/ionic: not in enabled drivers build config 00:11:25.313 net/ipn3ke: not in enabled drivers build config 00:11:25.313 net/ixgbe: not in enabled drivers build config 00:11:25.313 net/mana: not in enabled drivers build config 00:11:25.313 net/memif: not in enabled drivers build config 00:11:25.313 net/mlx4: not in enabled drivers build config 00:11:25.313 net/mlx5: not in enabled drivers build config 00:11:25.313 net/mvneta: not in enabled drivers build config 00:11:25.313 net/mvpp2: not in enabled drivers build config 00:11:25.313 net/netvsc: not in enabled drivers build config 00:11:25.313 net/nfb: not in enabled drivers build config 00:11:25.313 net/nfp: not in enabled drivers build config 00:11:25.313 net/ngbe: not in enabled drivers build config 00:11:25.313 net/null: not in enabled drivers build config 00:11:25.313 net/octeontx: not in enabled drivers build config 00:11:25.313 net/octeon_ep: not in enabled drivers build config 00:11:25.313 net/pcap: not in enabled drivers build config 00:11:25.313 net/pfe: not in enabled drivers build config 00:11:25.313 net/qede: not in enabled drivers build config 00:11:25.313 net/ring: not in enabled drivers build config 00:11:25.313 net/sfc: not in enabled drivers build config 00:11:25.314 net/softnic: not in enabled drivers build config 00:11:25.314 net/tap: not in enabled drivers build config 00:11:25.314 net/thunderx: not in enabled drivers build config 00:11:25.314 net/txgbe: not in enabled drivers build config 00:11:25.314 net/vdev_netvsc: not in enabled drivers build config 00:11:25.314 net/vhost: not in enabled drivers build config 
00:11:25.314 net/virtio: not in enabled drivers build config 00:11:25.314 net/vmxnet3: not in enabled drivers build config 00:11:25.314 raw/cnxk_bphy: not in enabled drivers build config 00:11:25.314 raw/cnxk_gpio: not in enabled drivers build config 00:11:25.314 raw/dpaa2_cmdif: not in enabled drivers build config 00:11:25.314 raw/ifpga: not in enabled drivers build config 00:11:25.314 raw/ntb: not in enabled drivers build config 00:11:25.314 raw/skeleton: not in enabled drivers build config 00:11:25.314 crypto/armv8: not in enabled drivers build config 00:11:25.314 crypto/bcmfs: not in enabled drivers build config 00:11:25.314 crypto/caam_jr: not in enabled drivers build config 00:11:25.314 crypto/ccp: not in enabled drivers build config 00:11:25.314 crypto/cnxk: not in enabled drivers build config 00:11:25.314 crypto/dpaa_sec: not in enabled drivers build config 00:11:25.314 crypto/dpaa2_sec: not in enabled drivers build config 00:11:25.314 crypto/ipsec_mb: not in enabled drivers build config 00:11:25.314 crypto/mlx5: not in enabled drivers build config 00:11:25.314 crypto/mvsam: not in enabled drivers build config 00:11:25.314 crypto/nitrox: not in enabled drivers build config 00:11:25.314 crypto/null: not in enabled drivers build config 00:11:25.314 crypto/octeontx: not in enabled drivers build config 00:11:25.314 crypto/openssl: not in enabled drivers build config 00:11:25.314 crypto/scheduler: not in enabled drivers build config 00:11:25.314 crypto/uadk: not in enabled drivers build config 00:11:25.314 crypto/virtio: not in enabled drivers build config 00:11:25.314 compress/isal: not in enabled drivers build config 00:11:25.314 compress/mlx5: not in enabled drivers build config 00:11:25.314 compress/octeontx: not in enabled drivers build config 00:11:25.314 compress/zlib: not in enabled drivers build config 00:11:25.314 regex/mlx5: not in enabled drivers build config 00:11:25.314 regex/cn9k: not in enabled drivers build config 00:11:25.314 ml/cnxk: not in enabled drivers build config 00:11:25.314 vdpa/ifc: not in enabled drivers build config 00:11:25.314 vdpa/mlx5: not in enabled drivers build config 00:11:25.314 vdpa/nfp: not in enabled drivers build config 00:11:25.314 vdpa/sfc: not in enabled drivers build config 00:11:25.314 event/cnxk: not in enabled drivers build config 00:11:25.314 event/dlb2: not in enabled drivers build config 00:11:25.314 event/dpaa: not in enabled drivers build config 00:11:25.314 event/dpaa2: not in enabled drivers build config 00:11:25.314 event/dsw: not in enabled drivers build config 00:11:25.314 event/opdl: not in enabled drivers build config 00:11:25.314 event/skeleton: not in enabled drivers build config 00:11:25.314 event/sw: not in enabled drivers build config 00:11:25.314 event/octeontx: not in enabled drivers build config 00:11:25.314 baseband/acc: not in enabled drivers build config 00:11:25.314 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:11:25.314 baseband/fpga_lte_fec: not in enabled drivers build config 00:11:25.314 baseband/la12xx: not in enabled drivers build config 00:11:25.314 baseband/null: not in enabled drivers build config 00:11:25.314 baseband/turbo_sw: not in enabled drivers build config 00:11:25.314 gpu/cuda: not in enabled drivers build config 00:11:25.314 00:11:25.314 00:11:25.314 Build targets in project: 217 00:11:25.314 00:11:25.314 DPDK 23.11.0 00:11:25.314 00:11:25.314 User defined options 00:11:25.314 libdir : lib 00:11:25.314 prefix : /home/vagrant/spdk_repo/dpdk/build 00:11:25.314 c_args : -fPIC -g 
-fcommon -Werror -Wno-stringop-overflow 00:11:25.314 c_link_args : 00:11:25.314 enable_docs : false 00:11:25.314 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:11:25.314 enable_kmods : false 00:11:25.314 machine : native 00:11:25.314 tests : false 00:11:25.314 00:11:25.314 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:11:25.314 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:11:25.314 21:14:14 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:11:25.314 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:11:25.572 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:11:25.572 [2/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:11:25.572 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:11:25.572 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:11:25.572 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:11:25.572 [6/707] Linking static target lib/librte_kvargs.a 00:11:25.572 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:11:25.572 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:11:25.572 [9/707] Linking static target lib/librte_log.a 00:11:25.572 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:11:25.831 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:11:25.831 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:11:25.831 [13/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:11:25.831 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:11:26.090 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:11:26.090 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:11:26.090 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:11:26.090 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:11:26.090 [19/707] Linking target lib/librte_log.so.24.0 00:11:26.090 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:11:26.349 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:11:26.349 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:11:26.349 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:11:26.349 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:11:26.349 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:11:26.350 [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:11:26.350 [27/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:11:26.608 [28/707] Linking target lib/librte_kvargs.so.24.0 00:11:26.608 [29/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:11:26.608 [30/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:11:26.608 [31/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:11:26.608 [32/707] Linking static target lib/librte_telemetry.a 00:11:26.608 [33/707] Compiling C 
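The WARNING printed above comes from newer Meson releases: invoking the configure step as `meson [options]` is ambiguous and deprecated in favour of the explicit `meson setup [options]` form. As a rough sketch only, reconstructed from the "User defined options" summary above (it is not necessarily the exact command autobuild_common.sh ran), the explicit form of this configuration would look roughly like:

  cd /home/vagrant/spdk_repo/dpdk
  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false
  ninja -C build-tmp -j10

The option names simply mirror the summary: prefix, libdir, c_args and c_link_args are standard Meson built-ins, while enable_docs, enable_drivers, enable_kmods, machine and tests are DPDK project options; values and paths are taken verbatim from the log.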
object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:11:26.608 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:11:26.608 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:11:26.608 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:11:26.608 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:11:26.867 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:11:26.867 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:11:26.867 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:11:26.867 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:11:26.867 [42/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:11:27.125 [43/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:11:27.125 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:11:27.125 [45/707] Linking target lib/librte_telemetry.so.24.0 00:11:27.125 [46/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:11:27.125 [47/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:11:27.125 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:11:27.125 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:11:27.125 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:11:27.383 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:11:27.383 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:11:27.383 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:11:27.383 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:11:27.383 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:11:27.383 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:11:27.642 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:11:27.642 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:11:27.642 [59/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:11:27.642 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:11:27.642 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:11:27.642 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:11:27.642 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:11:27.642 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:11:27.642 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:11:27.642 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:11:27.642 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:11:27.902 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:11:27.902 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:11:27.902 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:11:27.902 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:11:28.161 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
00:11:28.161 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:11:28.161 [74/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:11:28.161 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:11:28.161 [76/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:11:28.161 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:11:28.161 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:11:28.421 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:11:28.421 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:11:28.421 [81/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:11:28.421 [82/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:11:28.421 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:11:28.421 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:11:28.421 [85/707] Linking static target lib/librte_ring.a 00:11:28.680 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:11:28.680 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:11:28.680 [88/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:11:28.680 [89/707] Linking static target lib/librte_eal.a 00:11:28.680 [90/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:11:28.680 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:11:28.680 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:11:28.680 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:11:28.940 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:11:28.940 [95/707] Linking static target lib/librte_mempool.a 00:11:29.198 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:11:29.199 [97/707] Linking static target lib/librte_rcu.a 00:11:29.199 [98/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:11:29.199 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:11:29.199 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:11:29.199 [101/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:11:29.199 [102/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:11:29.199 [103/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:11:29.199 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:11:29.458 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:11:29.458 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:11:29.458 [107/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:11:29.458 [108/707] Linking static target lib/librte_net.a 00:11:29.458 [109/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:11:29.458 [110/707] Linking static target lib/librte_meter.a 00:11:29.718 [111/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:11:29.718 [112/707] Linking static target lib/librte_mbuf.a 00:11:29.718 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:11:29.718 [114/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:11:29.718 [115/707] Compiling 
C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:11:29.718 [116/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:11:29.718 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:11:29.718 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:11:29.977 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:11:30.237 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:11:30.237 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:11:30.501 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:11:30.501 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:11:30.501 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:11:30.769 [125/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:11:30.769 [126/707] Linking static target lib/librte_pci.a 00:11:30.769 [127/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:11:30.769 [128/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:11:30.769 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:11:30.769 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:11:30.769 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:11:30.769 [132/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:11:30.769 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:11:31.029 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:11:31.029 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:11:31.029 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:11:31.029 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:11:31.029 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:11:31.029 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:11:31.029 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:11:31.029 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:11:31.029 [142/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:11:31.029 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:11:31.288 [144/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:11:31.288 [145/707] Linking static target lib/librte_cmdline.a 00:11:31.547 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:11:31.547 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:11:31.547 [148/707] Linking static target lib/librte_metrics.a 00:11:31.547 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:11:31.806 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:11:31.806 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:11:31.806 [152/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:11:31.806 [153/707] Linking static target lib/librte_timer.a 00:11:32.065 [154/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:11:32.065 
[155/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:11:32.065 [156/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:11:32.324 [157/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:11:32.324 [158/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:11:32.324 [159/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:11:32.324 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:11:32.891 [161/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:11:32.891 [162/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:11:32.891 [163/707] Linking static target lib/librte_bitratestats.a 00:11:33.149 [164/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:11:33.149 [165/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:11:33.149 [166/707] Linking static target lib/librte_bbdev.a 00:11:33.149 [167/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:11:33.149 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:11:33.408 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:11:33.667 [170/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:33.667 [171/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:11:33.667 [172/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:11:33.667 [173/707] Linking static target lib/librte_hash.a 00:11:33.925 [174/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:11:33.925 [175/707] Linking static target lib/librte_ethdev.a 00:11:33.925 [176/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:11:33.925 [177/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:11:33.925 [178/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:11:33.925 [179/707] Linking static target lib/acl/libavx2_tmp.a 00:11:33.925 [180/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:11:34.183 [181/707] Linking target lib/librte_eal.so.24.0 00:11:34.183 [182/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:11:34.183 [183/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:11:34.183 [184/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:11:34.183 [185/707] Linking target lib/librte_ring.so.24.0 00:11:34.183 [186/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:11:34.183 [187/707] Linking target lib/librte_meter.so.24.0 00:11:34.183 [188/707] Linking target lib/librte_pci.so.24.0 00:11:34.442 [189/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:11:34.442 [190/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:11:34.442 [191/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:11:34.442 [192/707] Linking target lib/librte_rcu.so.24.0 00:11:34.442 [193/707] Linking target lib/librte_mempool.so.24.0 00:11:34.442 [194/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:11:34.442 [195/707] Linking target lib/librte_timer.so.24.0 00:11:34.442 [196/707] Linking static target lib/librte_cfgfile.a 00:11:34.442 [197/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:11:34.442 [198/707] 
Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:11:34.442 [199/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:11:34.442 [200/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:11:34.442 [201/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:11:34.702 [202/707] Linking target lib/librte_mbuf.so.24.0 00:11:34.702 [203/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:11:34.702 [204/707] Linking static target lib/librte_acl.a 00:11:34.702 [205/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:11:34.702 [206/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:11:34.702 [207/707] Linking target lib/librte_net.so.24.0 00:11:34.702 [208/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:11:34.702 [209/707] Linking target lib/librte_cfgfile.so.24.0 00:11:34.702 [210/707] Linking target lib/librte_bbdev.so.24.0 00:11:34.962 [211/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:11:34.962 [212/707] Linking static target lib/librte_bpf.a 00:11:34.962 [213/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:11:34.962 [214/707] Linking target lib/librte_cmdline.so.24.0 00:11:34.962 [215/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:11:34.962 [216/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:11:34.962 [217/707] Linking target lib/librte_hash.so.24.0 00:11:34.962 [218/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:11:34.962 [219/707] Linking target lib/librte_acl.so.24.0 00:11:34.962 [220/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:11:35.226 [221/707] Linking static target lib/librte_compressdev.a 00:11:35.226 [222/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:11:35.226 [223/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:11:35.226 [224/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:11:35.226 [225/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:11:35.226 [226/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:11:35.492 [227/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:11:35.492 [228/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:11:35.492 [229/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:35.492 [230/707] Linking target lib/librte_compressdev.so.24.0 00:11:35.752 [231/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:11:35.752 [232/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:11:35.752 [233/707] Linking static target lib/librte_distributor.a 00:11:35.752 [234/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:11:35.752 [235/707] Linking static target lib/librte_dmadev.a 00:11:36.011 [236/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:11:36.011 [237/707] Linking target lib/librte_distributor.so.24.0 00:11:36.011 [238/707] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:11:36.011 [239/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:36.011 [240/707] Linking target lib/librte_dmadev.so.24.0 00:11:36.270 [241/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:11:36.270 [242/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:11:36.270 [243/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:11:36.530 [244/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:11:36.530 [245/707] Linking static target lib/librte_efd.a 00:11:36.788 [246/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:11:36.789 [247/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:11:36.789 [248/707] Linking target lib/librte_efd.so.24.0 00:11:36.789 [249/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:11:36.789 [250/707] Linking static target lib/librte_dispatcher.a 00:11:36.789 [251/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:11:37.048 [252/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:11:37.048 [253/707] Linking static target lib/librte_cryptodev.a 00:11:37.048 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:11:37.048 [255/707] Linking static target lib/librte_gpudev.a 00:11:37.307 [256/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:11:37.307 [257/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:11:37.307 [258/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:11:37.307 [259/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:11:37.566 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:11:37.825 [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:11:37.825 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:11:37.825 [263/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:11:37.825 [264/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:37.825 [265/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:11:37.825 [266/707] Linking target lib/librte_gpudev.so.24.0 00:11:38.083 [267/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:11:38.083 [268/707] Linking static target lib/librte_gro.a 00:11:38.083 [269/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:11:38.084 [270/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:38.084 [271/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:38.084 [272/707] Linking target lib/librte_cryptodev.so.24.0 00:11:38.084 [273/707] Linking target lib/librte_ethdev.so.24.0 00:11:38.084 [274/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:11:38.343 [275/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:11:38.343 [276/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:11:38.343 [277/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:11:38.343 [278/707] Generating symbol file 
lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:11:38.343 [279/707] Linking static target lib/librte_eventdev.a 00:11:38.343 [280/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:11:38.343 [281/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:11:38.343 [282/707] Linking target lib/librte_bpf.so.24.0 00:11:38.343 [283/707] Linking target lib/librte_metrics.so.24.0 00:11:38.343 [284/707] Linking target lib/librte_gro.so.24.0 00:11:38.343 [285/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:11:38.343 [286/707] Linking static target lib/librte_gso.a 00:11:38.343 [287/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:11:38.343 [288/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:11:38.343 [289/707] Linking target lib/librte_bitratestats.so.24.0 00:11:38.603 [290/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:11:38.603 [291/707] Linking target lib/librte_gso.so.24.0 00:11:38.603 [292/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:11:38.603 [293/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:11:38.603 [294/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:11:38.603 [295/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:11:38.862 [296/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:11:38.862 [297/707] Linking static target lib/librte_jobstats.a 00:11:38.862 [298/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:11:38.862 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:11:38.862 [300/707] Linking static target lib/librte_ip_frag.a 00:11:39.121 [301/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:11:39.121 [302/707] Linking static target lib/librte_latencystats.a 00:11:39.121 [303/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:11:39.121 [304/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:11:39.121 [305/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:11:39.121 [306/707] Linking target lib/librte_jobstats.so.24.0 00:11:39.121 [307/707] Linking target lib/librte_ip_frag.so.24.0 00:11:39.121 [308/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:11:39.121 [309/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:11:39.121 [310/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:11:39.121 [311/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:11:39.380 [312/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:11:39.380 [313/707] Linking target lib/librte_latencystats.so.24.0 00:11:39.380 [314/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:11:39.380 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:11:39.380 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:11:39.639 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:11:39.639 [318/707] Linking static target lib/librte_lpm.a 00:11:39.639 [319/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:11:39.639 
[320/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:11:39.639 [321/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:11:39.897 [322/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:11:39.897 [323/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:11:39.897 [324/707] Linking static target lib/librte_pcapng.a 00:11:39.897 [325/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:11:39.897 [326/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:11:39.897 [327/707] Linking target lib/librte_lpm.so.24.0 00:11:40.158 [328/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:11:40.158 [329/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:11:40.158 [330/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:11:40.158 [331/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:11:40.158 [332/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:40.158 [333/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:11:40.158 [334/707] Linking target lib/librte_pcapng.so.24.0 00:11:40.158 [335/707] Linking target lib/librte_eventdev.so.24.0 00:11:40.418 [336/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:11:40.418 [337/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:11:40.418 [338/707] Linking target lib/librte_dispatcher.so.24.0 00:11:40.418 [339/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:11:40.418 [340/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:11:40.418 [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:11:40.418 [342/707] Linking static target lib/librte_power.a 00:11:40.418 [343/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:11:40.678 [344/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:11:40.678 [345/707] Linking static target lib/librte_rawdev.a 00:11:40.678 [346/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:11:40.678 [347/707] Linking static target lib/librte_regexdev.a 00:11:40.678 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:11:40.678 [349/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:11:40.678 [350/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:11:40.945 [351/707] Linking static target lib/librte_member.a 00:11:40.945 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:11:40.945 [353/707] Linking static target lib/librte_mldev.a 00:11:40.945 [354/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:11:40.945 [355/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:11:40.945 [356/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:40.945 [357/707] Linking target lib/librte_power.so.24.0 00:11:41.226 [358/707] Linking target lib/librte_rawdev.so.24.0 00:11:41.226 [359/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:11:41.226 [360/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:11:41.226 [361/707] 
Linking target lib/librte_member.so.24.0 00:11:41.226 [362/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:11:41.226 [363/707] Linking static target lib/librte_reorder.a 00:11:41.226 [364/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:11:41.226 [365/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:41.226 [366/707] Linking target lib/librte_regexdev.so.24.0 00:11:41.226 [367/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:11:41.226 [368/707] Linking static target lib/librte_rib.a 00:11:41.485 [369/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:11:41.485 [370/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:11:41.485 [371/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:11:41.485 [372/707] Linking target lib/librte_reorder.so.24.0 00:11:41.485 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:11:41.485 [374/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:11:41.485 [375/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:11:41.485 [376/707] Linking static target lib/librte_stack.a 00:11:41.485 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:11:41.745 [378/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:11:41.745 [379/707] Linking static target lib/librte_security.a 00:11:41.745 [380/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:11:41.745 [381/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:11:41.745 [382/707] Linking target lib/librte_rib.so.24.0 00:11:41.745 [383/707] Linking target lib/librte_stack.so.24.0 00:11:41.745 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:11:42.005 [385/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:11:42.005 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:11:42.005 [387/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:42.005 [388/707] Linking target lib/librte_mldev.so.24.0 00:11:42.005 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:11:42.005 [390/707] Linking target lib/librte_security.so.24.0 00:11:42.005 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:11:42.265 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:11:42.265 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:11:42.265 [394/707] Linking static target lib/librte_sched.a 00:11:42.528 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:11:42.528 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:11:42.528 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:11:42.788 [398/707] Linking target lib/librte_sched.so.24.0 00:11:42.788 [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:11:42.788 [400/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:11:42.788 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:11:42.788 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:11:43.047 [403/707] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:11:43.306 [404/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:11:43.306 [405/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:11:43.306 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:11:43.306 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:11:43.568 [408/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:11:43.568 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:11:43.568 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:11:43.568 [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:11:43.568 [412/707] Linking static target lib/librte_ipsec.a 00:11:43.568 [413/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:11:43.827 [414/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:11:43.827 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:11:43.827 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:11:44.085 [417/707] Linking target lib/librte_ipsec.so.24.0 00:11:44.085 [418/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:11:44.085 [419/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:11:44.085 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:11:44.653 [421/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:11:44.653 [422/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:11:44.653 [423/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:11:44.653 [424/707] Linking static target lib/librte_fib.a 00:11:44.653 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:11:44.653 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:11:44.912 [427/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:11:44.912 [428/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:11:44.912 [429/707] Linking target lib/librte_fib.so.24.0 00:11:44.912 [430/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:11:44.912 [431/707] Linking static target lib/librte_pdcp.a 00:11:44.912 [432/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:11:45.171 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:11:45.171 [434/707] Linking target lib/librte_pdcp.so.24.0 00:11:45.432 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:11:45.432 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:11:45.432 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:11:45.432 [438/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:11:45.432 [439/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:11:45.697 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:11:45.965 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:11:45.965 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:11:45.965 [443/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:11:45.965 [444/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:11:45.965 [445/707] Linking static target lib/librte_port.a 
00:11:45.965 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:11:45.965 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:11:45.965 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:11:46.236 [449/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:11:46.236 [450/707] Linking static target lib/librte_pdump.a 00:11:46.236 [451/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:11:46.509 [452/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:11:46.509 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:11:46.509 [454/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:11:46.509 [455/707] Linking target lib/librte_port.so.24.0 00:11:46.509 [456/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:11:46.509 [457/707] Linking target lib/librte_pdump.so.24.0 00:11:46.509 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:11:47.082 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:11:47.082 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:11:47.082 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:11:47.082 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:11:47.082 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:11:47.082 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:11:47.342 [465/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:11:47.342 [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:11:47.601 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:11:47.601 [468/707] Linking static target lib/librte_table.a 00:11:47.601 [469/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:11:47.601 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:11:47.860 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:11:48.121 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:11:48.121 [473/707] Linking target lib/librte_table.so.24.0 00:11:48.121 [474/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:11:48.121 [475/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:11:48.121 [476/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:11:48.121 [477/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:11:48.384 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:11:48.646 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:11:48.646 [480/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:11:48.646 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:11:48.646 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:11:48.905 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:11:49.164 [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:11:49.164 [485/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:11:49.164 [486/707] 
Linking static target lib/librte_graph.a 00:11:49.164 [487/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:11:49.164 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:11:49.164 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:11:49.424 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:11:49.683 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:11:49.683 [492/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:11:49.683 [493/707] Linking target lib/librte_graph.so.24.0 00:11:49.942 [494/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:11:49.942 [495/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:11:49.942 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:11:49.942 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:11:50.202 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:11:50.202 [499/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:11:50.202 [500/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:11:50.468 [501/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:11:50.468 [502/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:11:50.468 [503/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:11:50.468 [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:11:50.727 [505/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:11:50.727 [506/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:11:50.727 [507/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:11:50.727 [508/707] Linking static target lib/librte_node.a 00:11:50.727 [509/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:11:50.727 [510/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:11:50.727 [511/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:11:50.986 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:11:50.986 [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:11:50.986 [514/707] Linking target lib/librte_node.so.24.0 00:11:51.246 [515/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:11:51.246 [516/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:11:51.246 [517/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:11:51.246 [518/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:11:51.246 [519/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:11:51.246 [520/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:11:51.246 [521/707] Linking static target drivers/librte_bus_vdev.a 00:11:51.563 [522/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:11:51.563 [523/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:11:51.563 [524/707] Linking static target drivers/librte_bus_pci.a 00:11:51.563 [525/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:11:51.563 [526/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:11:51.563 [527/707] 
Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:11:51.563 [528/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:11:51.563 [529/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:11:51.563 [530/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:11:51.563 [531/707] Linking target drivers/librte_bus_vdev.so.24.0 00:11:51.831 [532/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:11:51.831 [533/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:11:51.831 [534/707] Linking target drivers/librte_bus_pci.so.24.0 00:11:51.831 [535/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:11:51.831 [536/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:11:52.098 [537/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:11:52.098 [538/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:11:52.098 [539/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:11:52.098 [540/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:11:52.098 [541/707] Linking static target drivers/librte_mempool_ring.a 00:11:52.098 [542/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:11:52.098 [543/707] Linking target drivers/librte_mempool_ring.so.24.0 00:11:52.368 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:11:52.630 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:11:52.889 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:11:52.889 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:11:53.458 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:11:53.718 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:11:53.718 [550/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:11:53.718 [551/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:11:53.718 [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:11:53.718 [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:11:53.976 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:11:53.976 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:11:54.234 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:11:54.234 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:11:54.493 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:11:54.493 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:11:54.752 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:11:54.752 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:11:55.012 [562/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:11:55.012 [563/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:11:55.272 [564/707] Compiling C object 
app/dpdk-graph.p/graph_ethdev.c.o 00:11:55.272 [565/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:11:55.272 [566/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:11:55.532 [567/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:11:55.532 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:11:55.532 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:11:55.801 [570/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:11:55.801 [571/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:11:55.801 [572/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:11:55.801 [573/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:11:56.078 [574/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:11:56.078 [575/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:11:56.078 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:11:56.336 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:11:56.336 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:11:56.336 [579/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:11:56.595 [580/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:11:56.595 [581/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:11:56.595 [582/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:11:56.854 [583/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:11:56.854 [584/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:11:56.854 [585/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:11:56.854 [586/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:11:56.854 [587/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:11:56.854 [588/707] Linking static target drivers/librte_net_i40e.a 00:11:57.113 [589/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:11:57.113 [590/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:11:57.373 [591/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:11:57.373 [592/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:11:57.373 [593/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:11:57.632 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:11:57.632 [595/707] Linking target drivers/librte_net_i40e.so.24.0 00:11:57.632 [596/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:11:57.632 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:11:57.890 [598/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:11:57.890 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:11:58.148 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:11:58.148 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:11:58.408 [602/707] Compiling C 
object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:11:58.408 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:11:58.408 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:11:58.408 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:11:58.408 [606/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:11:58.666 [607/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:11:58.666 [608/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:11:58.666 [609/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:11:58.666 [610/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:11:58.925 [611/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:11:58.925 [612/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:11:59.185 [613/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:11:59.185 [614/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:11:59.185 [615/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:11:59.446 [616/707] Linking static target lib/librte_vhost.a 00:11:59.446 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:11:59.446 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:12:00.013 [619/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:12:00.013 [620/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:12:00.271 [621/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:12:00.272 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:12:00.272 [623/707] Linking target lib/librte_vhost.so.24.0 00:12:00.272 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:12:00.272 [625/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:12:00.272 [626/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:12:00.529 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:12:00.529 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:12:00.790 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:12:00.790 [630/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:12:00.790 [631/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:12:00.790 [632/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:12:00.790 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:12:00.790 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:12:01.054 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:12:01.311 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:12:01.311 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:12:01.311 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:12:01.311 [639/707] 
Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:12:01.569 [640/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:12:01.569 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:12:01.569 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:12:01.569 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:12:01.828 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:12:01.828 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:12:01.828 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:12:01.828 [647/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:12:02.088 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:12:02.088 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:12:02.088 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:12:02.088 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:12:02.346 [652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:12:02.604 [653/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:12:02.604 [654/707] Linking static target lib/librte_pipeline.a 00:12:02.604 [655/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:12:02.604 [656/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:12:02.604 [657/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:12:02.862 [658/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:12:02.862 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:12:02.862 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:12:02.862 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:12:02.862 [662/707] Linking target app/dpdk-dumpcap 00:12:02.862 [663/707] Linking target app/dpdk-graph 00:12:03.119 [664/707] Linking target app/dpdk-pdump 00:12:03.119 [665/707] Linking target app/dpdk-proc-info 00:12:03.378 [666/707] Linking target app/dpdk-test-acl 00:12:03.378 [667/707] Linking target app/dpdk-test-bbdev 00:12:03.378 [668/707] Linking target app/dpdk-test-cmdline 00:12:03.378 [669/707] Linking target app/dpdk-test-crypto-perf 00:12:03.378 [670/707] Linking target app/dpdk-test-compress-perf 00:12:03.636 [671/707] Linking target app/dpdk-test-dma-perf 00:12:03.636 [672/707] Linking target app/dpdk-test-eventdev 00:12:03.636 [673/707] Linking target app/dpdk-test-fib 00:12:03.893 [674/707] Linking target app/dpdk-test-flow-perf 00:12:03.893 [675/707] Linking target app/dpdk-test-gpudev 00:12:03.893 [676/707] Linking target app/dpdk-test-mldev 00:12:03.893 [677/707] Linking target app/dpdk-test-pipeline 00:12:04.150 [678/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:12:04.408 [679/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:12:04.408 [680/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:12:04.408 [681/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:12:04.408 [682/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:12:04.408 [683/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:12:04.666 
[684/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:12:04.925 [685/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:12:04.925 [686/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:12:04.925 [687/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:12:04.925 [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:12:04.925 [689/707] Linking target lib/librte_pipeline.so.24.0 00:12:05.184 [690/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:12:05.184 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:12:05.443 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:12:05.443 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:12:05.701 [694/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:12:05.701 [695/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:12:05.701 [696/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:12:05.959 [697/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:12:05.959 [698/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:12:06.218 [699/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:12:06.218 [700/707] Linking target app/dpdk-test-sad 00:12:06.218 [701/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:12:06.555 [702/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:12:06.555 [703/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:12:06.555 [704/707] Linking target app/dpdk-test-regex 00:12:06.555 [705/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:12:06.823 [706/707] Linking target app/dpdk-testpmd 00:12:07.083 [707/707] Linking target app/dpdk-test-security-perf 00:12:07.083 21:14:56 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:12:07.083 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:12:07.083 [0/1] Installing files. 
00:12:07.347 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:12:07.347 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:12:07.347 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:12:07.347 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.348 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:12:07.349 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.349 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.349 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:12:07.350 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.350 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.351 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.351 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:12:07.352 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:12:07.353 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:12:07.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:12:07.353 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.353 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.353 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.353 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.353 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.353 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.353 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:12:07.354 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:12:07.354 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.354 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.615 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.615 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.615 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.615 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:12:07.615 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.615 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:12:07.615 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.615 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:12:07.615 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:12:07.615 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:12:07.615 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.878 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.878 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.878 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.878 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.878 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.878 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.878 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.879 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:12:07.880 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:12:07.880 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:12:07.880 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:12:07.880 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:12:07.880 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:12:07.880 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:12:07.880 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:12:07.880 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:12:07.880 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:12:07.880 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:12:07.880 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:12:07.880 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:12:07.880 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:12:07.880 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:12:07.880 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:12:07.880 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:12:07.880 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:12:07.880 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:12:07.880 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:12:07.880 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:12:07.880 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:12:07.880 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:12:07.880 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:12:07.880 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:12:07.881 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:12:07.881 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:12:07.881 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:12:07.881 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:12:07.881 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:12:07.881 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:12:07.881 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:12:07.881 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:12:07.881 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:12:07.881 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:12:07.881 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:12:07.881 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:12:07.881 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:12:07.881 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:12:07.881 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:12:07.881 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:12:07.881 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:12:07.881 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:12:07.881 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:12:07.881 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:12:07.881 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:12:07.881 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:12:07.881 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:12:07.881 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:12:07.881 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:12:07.881 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:12:07.881 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:12:07.881 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:12:07.881 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:12:07.881 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:12:07.881 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:12:07.881 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:12:07.881 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:12:07.881 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:12:07.881 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:12:07.881 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:12:07.881 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:12:07.881 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:12:07.881 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:12:07.881 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:12:07.881 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:12:07.881 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:12:07.881 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:12:07.881 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:12:07.881 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:12:07.881 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:12:07.881 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:12:07.881 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:12:07.881 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:12:07.881 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:12:07.881 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:12:07.881 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:12:07.881 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:12:07.881 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:12:07.881 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:12:07.881 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:12:07.881 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:12:07.881 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:12:07.881 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:12:07.881 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:12:07.881 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:12:07.881 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:12:07.881 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:12:07.881 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:12:07.881 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:12:07.881 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:12:07.881 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:12:07.881 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:12:07.881 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:12:07.881 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:12:07.881 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:12:07.881 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:12:07.881 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:12:07.881 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:12:07.881 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:12:07.881 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:12:07.881 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:12:07.881 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:12:07.881 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:12:07.881 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:12:07.881 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:12:07.881 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:12:07.881 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:12:07.881 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:12:07.881 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:12:07.881 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:12:07.881 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:12:07.881 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:12:07.881 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:12:07.881 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:12:07.881 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:12:07.881 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:12:07.881 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:12:07.881 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:12:07.881 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:12:07.881 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:12:07.881 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:12:07.881 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:12:07.881 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:12:07.881 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:12:07.881 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:12:07.881 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:12:07.881 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:12:07.881 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:12:07.881 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:12:07.881 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:12:07.881 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:12:07.881 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:12:07.881 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:12:07.882 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:12:07.882 21:14:56 -- common/autobuild_common.sh@189 -- $ uname -s 00:12:07.882 21:14:56 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:12:07.882 21:14:56 -- common/autobuild_common.sh@200 -- $ cat 00:12:07.882 21:14:56 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:12:07.882 00:12:07.882 real 0m48.920s 00:12:07.882 user 5m49.577s 00:12:07.882 sys 0m55.491s 00:12:07.882 21:14:56 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:12:07.882 21:14:56 -- common/autotest_common.sh@10 -- $ set +x 00:12:07.882 ************************************ 00:12:07.882 END TEST build_native_dpdk 00:12:07.882 ************************************ 00:12:07.882 21:14:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:12:07.882 21:14:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:12:07.882 21:14:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:12:07.882 21:14:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:12:07.882 21:14:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:12:07.882 21:14:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:12:07.882 21:14:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:12:07.882 21:14:57 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:12:08.140 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:12:08.140 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:12:08.140 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:12:08.140 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:08.707 Using 'verbs' RDMA provider 00:12:25.014 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:12:39.899 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:12:39.899 go version go1.21.1 linux/amd64 00:12:40.467 Creating mk/config.mk...done. 00:12:40.467 Creating mk/cc.flags.mk...done. 00:12:40.467 Type 'make' to build. 00:12:40.467 21:15:29 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:12:40.467 21:15:29 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:12:40.467 21:15:29 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:12:40.467 21:15:29 -- common/autotest_common.sh@10 -- $ set +x 00:12:40.467 ************************************ 00:12:40.467 START TEST make 00:12:40.467 ************************************ 00:12:40.467 21:15:29 -- common/autotest_common.sh@1111 -- $ make -j10 00:12:41.036 make[1]: Nothing to be done for 'all'. 
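[editor's note] For reference, a minimal sketch of how one might reproduce the DPDK-linked SPDK configure/build recorded above, outside of CI. The install prefix, --with-dpdk path, pkg-config directory, and make -j10 invocation are taken from the log; trimming the configure line down to these few flags is an assumption and will not reproduce the full CI feature set.

  # Sketch only: rebuild SPDK against the locally installed DPDK from this log.
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror \
      --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared
  # configure finds DPDK via the pkg-config files installed into the build tree;
  # the same .pc file can be sanity-checked by hand:
  PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig \
      pkg-config --libs libdpdk
  make -j10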
00:13:07.590 CC lib/ut_mock/mock.o 00:13:07.590 CC lib/log/log_flags.o 00:13:07.590 CC lib/log/log.o 00:13:07.590 CC lib/log/log_deprecated.o 00:13:07.590 CC lib/ut/ut.o 00:13:07.590 LIB libspdk_ut_mock.a 00:13:07.590 LIB libspdk_ut.a 00:13:07.590 LIB libspdk_log.a 00:13:07.590 SO libspdk_ut_mock.so.6.0 00:13:07.590 SO libspdk_ut.so.2.0 00:13:07.590 SO libspdk_log.so.7.0 00:13:07.590 SYMLINK libspdk_ut_mock.so 00:13:07.590 SYMLINK libspdk_ut.so 00:13:07.590 SYMLINK libspdk_log.so 00:13:07.590 CC lib/util/base64.o 00:13:07.590 CC lib/util/bit_array.o 00:13:07.590 CC lib/util/crc32.o 00:13:07.590 CC lib/util/cpuset.o 00:13:07.590 CC lib/util/crc32c.o 00:13:07.590 CC lib/util/crc16.o 00:13:07.590 CC lib/dma/dma.o 00:13:07.590 CC lib/ioat/ioat.o 00:13:07.590 CXX lib/trace_parser/trace.o 00:13:07.590 CC lib/vfio_user/host/vfio_user_pci.o 00:13:07.590 CC lib/util/crc32_ieee.o 00:13:07.590 CC lib/util/crc64.o 00:13:07.590 CC lib/util/dif.o 00:13:07.591 CC lib/util/fd.o 00:13:07.591 CC lib/util/file.o 00:13:07.591 CC lib/util/hexlify.o 00:13:07.591 LIB libspdk_dma.a 00:13:07.591 SO libspdk_dma.so.4.0 00:13:07.591 CC lib/util/iov.o 00:13:07.591 CC lib/util/math.o 00:13:07.591 LIB libspdk_ioat.a 00:13:07.591 SO libspdk_ioat.so.7.0 00:13:07.591 CC lib/util/pipe.o 00:13:07.591 SYMLINK libspdk_dma.so 00:13:07.591 CC lib/vfio_user/host/vfio_user.o 00:13:07.591 CC lib/util/strerror_tls.o 00:13:07.591 SYMLINK libspdk_ioat.so 00:13:07.591 CC lib/util/string.o 00:13:07.591 CC lib/util/uuid.o 00:13:07.591 CC lib/util/fd_group.o 00:13:07.591 CC lib/util/xor.o 00:13:07.591 CC lib/util/zipf.o 00:13:07.591 LIB libspdk_vfio_user.a 00:13:07.591 SO libspdk_vfio_user.so.5.0 00:13:07.591 LIB libspdk_util.a 00:13:07.591 SYMLINK libspdk_vfio_user.so 00:13:07.591 SO libspdk_util.so.9.0 00:13:07.591 LIB libspdk_trace_parser.a 00:13:07.591 SYMLINK libspdk_util.so 00:13:07.591 SO libspdk_trace_parser.so.5.0 00:13:07.591 SYMLINK libspdk_trace_parser.so 00:13:07.591 CC lib/conf/conf.o 00:13:07.591 CC lib/json/json_util.o 00:13:07.591 CC lib/rdma/common.o 00:13:07.591 CC lib/json/json_parse.o 00:13:07.591 CC lib/rdma/rdma_verbs.o 00:13:07.591 CC lib/json/json_write.o 00:13:07.591 CC lib/env_dpdk/env.o 00:13:07.591 CC lib/env_dpdk/memory.o 00:13:07.591 CC lib/vmd/vmd.o 00:13:07.591 CC lib/idxd/idxd.o 00:13:07.591 LIB libspdk_conf.a 00:13:07.591 CC lib/idxd/idxd_user.o 00:13:07.591 CC lib/vmd/led.o 00:13:07.591 CC lib/env_dpdk/pci.o 00:13:07.591 SO libspdk_conf.so.6.0 00:13:07.591 LIB libspdk_rdma.a 00:13:07.591 SYMLINK libspdk_conf.so 00:13:07.591 LIB libspdk_json.a 00:13:07.591 CC lib/env_dpdk/init.o 00:13:07.591 SO libspdk_rdma.so.6.0 00:13:07.591 SO libspdk_json.so.6.0 00:13:07.591 SYMLINK libspdk_rdma.so 00:13:07.591 CC lib/env_dpdk/threads.o 00:13:07.591 CC lib/env_dpdk/pci_ioat.o 00:13:07.591 SYMLINK libspdk_json.so 00:13:07.591 CC lib/env_dpdk/pci_virtio.o 00:13:07.591 CC lib/env_dpdk/pci_vmd.o 00:13:07.591 CC lib/env_dpdk/pci_idxd.o 00:13:07.591 CC lib/env_dpdk/pci_event.o 00:13:07.591 LIB libspdk_idxd.a 00:13:07.591 CC lib/jsonrpc/jsonrpc_server.o 00:13:07.591 CC lib/env_dpdk/sigbus_handler.o 00:13:07.591 SO libspdk_idxd.so.12.0 00:13:07.591 LIB libspdk_vmd.a 00:13:07.591 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:13:07.591 CC lib/env_dpdk/pci_dpdk.o 00:13:07.591 SO libspdk_vmd.so.6.0 00:13:07.591 SYMLINK libspdk_idxd.so 00:13:07.591 CC lib/env_dpdk/pci_dpdk_2207.o 00:13:07.591 CC lib/env_dpdk/pci_dpdk_2211.o 00:13:07.591 CC lib/jsonrpc/jsonrpc_client.o 00:13:07.591 SYMLINK libspdk_vmd.so 00:13:07.591 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:13:07.591 LIB libspdk_jsonrpc.a 00:13:07.591 SO libspdk_jsonrpc.so.6.0 00:13:07.591 SYMLINK libspdk_jsonrpc.so 00:13:07.591 CC lib/rpc/rpc.o 00:13:07.591 LIB libspdk_env_dpdk.a 00:13:07.591 SO libspdk_env_dpdk.so.14.0 00:13:07.591 LIB libspdk_rpc.a 00:13:07.591 SO libspdk_rpc.so.6.0 00:13:07.591 SYMLINK libspdk_rpc.so 00:13:07.591 SYMLINK libspdk_env_dpdk.so 00:13:07.591 CC lib/trace/trace.o 00:13:07.591 CC lib/trace/trace_rpc.o 00:13:07.591 CC lib/trace/trace_flags.o 00:13:07.591 CC lib/notify/notify.o 00:13:07.591 CC lib/notify/notify_rpc.o 00:13:07.591 CC lib/keyring/keyring_rpc.o 00:13:07.591 CC lib/keyring/keyring.o 00:13:07.591 LIB libspdk_keyring.a 00:13:07.591 LIB libspdk_notify.a 00:13:07.591 LIB libspdk_trace.a 00:13:07.591 SO libspdk_keyring.so.1.0 00:13:07.591 SO libspdk_notify.so.6.0 00:13:07.591 SO libspdk_trace.so.10.0 00:13:07.591 SYMLINK libspdk_keyring.so 00:13:07.591 SYMLINK libspdk_notify.so 00:13:07.591 SYMLINK libspdk_trace.so 00:13:08.159 CC lib/thread/thread.o 00:13:08.159 CC lib/thread/iobuf.o 00:13:08.159 CC lib/sock/sock.o 00:13:08.159 CC lib/sock/sock_rpc.o 00:13:08.418 LIB libspdk_sock.a 00:13:08.418 SO libspdk_sock.so.9.0 00:13:08.418 SYMLINK libspdk_sock.so 00:13:08.985 CC lib/nvme/nvme_ctrlr_cmd.o 00:13:08.985 CC lib/nvme/nvme_ctrlr.o 00:13:08.985 CC lib/nvme/nvme_ns.o 00:13:08.985 CC lib/nvme/nvme_fabric.o 00:13:08.985 CC lib/nvme/nvme_ns_cmd.o 00:13:08.985 CC lib/nvme/nvme_qpair.o 00:13:08.985 CC lib/nvme/nvme_pcie_common.o 00:13:08.985 CC lib/nvme/nvme_pcie.o 00:13:08.985 CC lib/nvme/nvme.o 00:13:09.243 LIB libspdk_thread.a 00:13:09.502 SO libspdk_thread.so.10.0 00:13:09.502 SYMLINK libspdk_thread.so 00:13:09.502 CC lib/nvme/nvme_quirks.o 00:13:09.502 CC lib/nvme/nvme_transport.o 00:13:09.502 CC lib/nvme/nvme_discovery.o 00:13:09.502 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:13:09.502 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:13:09.761 CC lib/nvme/nvme_tcp.o 00:13:09.761 CC lib/nvme/nvme_opal.o 00:13:09.761 CC lib/nvme/nvme_io_msg.o 00:13:09.761 CC lib/nvme/nvme_poll_group.o 00:13:10.019 CC lib/nvme/nvme_zns.o 00:13:10.019 CC lib/nvme/nvme_stubs.o 00:13:10.019 CC lib/nvme/nvme_auth.o 00:13:10.276 CC lib/nvme/nvme_cuse.o 00:13:10.276 CC lib/nvme/nvme_rdma.o 00:13:10.276 CC lib/accel/accel.o 00:13:10.535 CC lib/blob/blobstore.o 00:13:10.535 CC lib/init/json_config.o 00:13:10.535 CC lib/init/subsystem.o 00:13:10.793 CC lib/init/subsystem_rpc.o 00:13:10.793 CC lib/init/rpc.o 00:13:10.793 CC lib/accel/accel_rpc.o 00:13:11.051 CC lib/blob/request.o 00:13:11.051 LIB libspdk_init.a 00:13:11.051 CC lib/accel/accel_sw.o 00:13:11.051 SO libspdk_init.so.5.0 00:13:11.051 CC lib/blob/zeroes.o 00:13:11.051 CC lib/blob/blob_bs_dev.o 00:13:11.051 CC lib/virtio/virtio.o 00:13:11.051 CC lib/virtio/virtio_vhost_user.o 00:13:11.309 SYMLINK libspdk_init.so 00:13:11.309 CC lib/virtio/virtio_vfio_user.o 00:13:11.309 CC lib/virtio/virtio_pci.o 00:13:11.309 LIB libspdk_accel.a 00:13:11.309 CC lib/event/reactor.o 00:13:11.309 CC lib/event/log_rpc.o 00:13:11.309 CC lib/event/app.o 00:13:11.309 CC lib/event/app_rpc.o 00:13:11.309 SO libspdk_accel.so.15.0 00:13:11.566 CC lib/event/scheduler_static.o 00:13:11.566 SYMLINK libspdk_accel.so 00:13:11.566 LIB libspdk_virtio.a 00:13:11.567 SO libspdk_virtio.so.7.0 00:13:11.567 LIB libspdk_nvme.a 00:13:11.567 SYMLINK libspdk_virtio.so 00:13:11.824 CC lib/bdev/bdev.o 00:13:11.824 CC lib/bdev/bdev_zone.o 00:13:11.824 CC lib/bdev/bdev_rpc.o 00:13:11.824 CC lib/bdev/part.o 00:13:11.824 CC lib/bdev/scsi_nvme.o 00:13:11.824 
SO libspdk_nvme.so.13.0 00:13:11.824 LIB libspdk_event.a 00:13:12.083 SO libspdk_event.so.13.0 00:13:12.083 SYMLINK libspdk_event.so 00:13:12.083 SYMLINK libspdk_nvme.so 00:13:13.461 LIB libspdk_blob.a 00:13:13.461 SO libspdk_blob.so.11.0 00:13:13.461 SYMLINK libspdk_blob.so 00:13:13.719 CC lib/blobfs/blobfs.o 00:13:13.719 CC lib/blobfs/tree.o 00:13:13.719 CC lib/lvol/lvol.o 00:13:14.286 LIB libspdk_bdev.a 00:13:14.286 SO libspdk_bdev.so.15.0 00:13:14.286 SYMLINK libspdk_bdev.so 00:13:14.545 LIB libspdk_blobfs.a 00:13:14.545 SO libspdk_blobfs.so.10.0 00:13:14.545 CC lib/ftl/ftl_core.o 00:13:14.545 CC lib/ftl/ftl_init.o 00:13:14.545 CC lib/ftl/ftl_layout.o 00:13:14.545 CC lib/ftl/ftl_debug.o 00:13:14.545 CC lib/scsi/dev.o 00:13:14.545 CC lib/ublk/ublk.o 00:13:14.545 CC lib/nvmf/ctrlr.o 00:13:14.545 CC lib/nbd/nbd.o 00:13:14.545 LIB libspdk_lvol.a 00:13:14.545 SYMLINK libspdk_blobfs.so 00:13:14.545 CC lib/ublk/ublk_rpc.o 00:13:14.545 SO libspdk_lvol.so.10.0 00:13:14.839 SYMLINK libspdk_lvol.so 00:13:14.839 CC lib/ftl/ftl_io.o 00:13:14.839 CC lib/ftl/ftl_sb.o 00:13:14.839 CC lib/nvmf/ctrlr_discovery.o 00:13:14.839 CC lib/ftl/ftl_l2p.o 00:13:14.839 CC lib/scsi/lun.o 00:13:14.839 CC lib/ftl/ftl_l2p_flat.o 00:13:14.839 CC lib/ftl/ftl_nv_cache.o 00:13:14.839 CC lib/ftl/ftl_band.o 00:13:15.097 CC lib/nbd/nbd_rpc.o 00:13:15.097 CC lib/ftl/ftl_band_ops.o 00:13:15.097 CC lib/ftl/ftl_writer.o 00:13:15.097 CC lib/ftl/ftl_rq.o 00:13:15.097 CC lib/scsi/port.o 00:13:15.097 LIB libspdk_ublk.a 00:13:15.097 LIB libspdk_nbd.a 00:13:15.097 SO libspdk_ublk.so.3.0 00:13:15.097 SO libspdk_nbd.so.7.0 00:13:15.355 SYMLINK libspdk_ublk.so 00:13:15.355 CC lib/ftl/ftl_reloc.o 00:13:15.355 CC lib/nvmf/ctrlr_bdev.o 00:13:15.355 CC lib/scsi/scsi.o 00:13:15.355 SYMLINK libspdk_nbd.so 00:13:15.355 CC lib/nvmf/subsystem.o 00:13:15.355 CC lib/ftl/ftl_l2p_cache.o 00:13:15.355 CC lib/nvmf/nvmf.o 00:13:15.355 CC lib/ftl/ftl_p2l.o 00:13:15.355 CC lib/ftl/mngt/ftl_mngt.o 00:13:15.355 CC lib/scsi/scsi_bdev.o 00:13:15.614 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:13:15.614 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:13:15.614 CC lib/nvmf/nvmf_rpc.o 00:13:15.872 CC lib/ftl/mngt/ftl_mngt_startup.o 00:13:15.872 CC lib/nvmf/transport.o 00:13:15.872 CC lib/scsi/scsi_pr.o 00:13:15.872 CC lib/ftl/mngt/ftl_mngt_md.o 00:13:15.872 CC lib/nvmf/tcp.o 00:13:15.872 CC lib/scsi/scsi_rpc.o 00:13:15.872 CC lib/nvmf/rdma.o 00:13:16.131 CC lib/scsi/task.o 00:13:16.131 CC lib/ftl/mngt/ftl_mngt_misc.o 00:13:16.131 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:13:16.131 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:13:16.131 CC lib/ftl/mngt/ftl_mngt_band.o 00:13:16.389 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:13:16.389 LIB libspdk_scsi.a 00:13:16.389 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:13:16.389 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:13:16.389 SO libspdk_scsi.so.9.0 00:13:16.389 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:13:16.389 CC lib/ftl/utils/ftl_conf.o 00:13:16.389 CC lib/ftl/utils/ftl_md.o 00:13:16.389 CC lib/ftl/utils/ftl_mempool.o 00:13:16.389 SYMLINK libspdk_scsi.so 00:13:16.389 CC lib/ftl/utils/ftl_bitmap.o 00:13:16.648 CC lib/ftl/utils/ftl_property.o 00:13:16.648 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:13:16.648 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:13:16.648 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:13:16.648 CC lib/iscsi/conn.o 00:13:16.648 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:13:16.648 CC lib/vhost/vhost.o 00:13:16.648 CC lib/iscsi/init_grp.o 00:13:16.906 CC lib/iscsi/iscsi.o 00:13:16.906 CC lib/iscsi/md5.o 00:13:16.906 CC lib/iscsi/param.o 00:13:16.906 CC 
lib/iscsi/portal_grp.o 00:13:16.906 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:13:16.906 CC lib/iscsi/tgt_node.o 00:13:16.906 CC lib/iscsi/iscsi_subsystem.o 00:13:17.164 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:13:17.164 CC lib/iscsi/iscsi_rpc.o 00:13:17.164 CC lib/ftl/upgrade/ftl_sb_v3.o 00:13:17.164 CC lib/ftl/upgrade/ftl_sb_v5.o 00:13:17.164 CC lib/ftl/nvc/ftl_nvc_dev.o 00:13:17.423 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:13:17.423 CC lib/ftl/base/ftl_base_dev.o 00:13:17.423 CC lib/iscsi/task.o 00:13:17.423 CC lib/vhost/vhost_rpc.o 00:13:17.423 CC lib/vhost/vhost_scsi.o 00:13:17.423 CC lib/ftl/base/ftl_base_bdev.o 00:13:17.423 CC lib/ftl/ftl_trace.o 00:13:17.423 CC lib/vhost/vhost_blk.o 00:13:17.423 CC lib/vhost/rte_vhost_user.o 00:13:17.681 LIB libspdk_ftl.a 00:13:17.940 SO libspdk_ftl.so.9.0 00:13:17.940 LIB libspdk_nvmf.a 00:13:17.940 LIB libspdk_iscsi.a 00:13:18.200 SO libspdk_nvmf.so.18.0 00:13:18.200 SO libspdk_iscsi.so.8.0 00:13:18.200 SYMLINK libspdk_ftl.so 00:13:18.200 SYMLINK libspdk_nvmf.so 00:13:18.460 SYMLINK libspdk_iscsi.so 00:13:18.460 LIB libspdk_vhost.a 00:13:18.720 SO libspdk_vhost.so.8.0 00:13:18.720 SYMLINK libspdk_vhost.so 00:13:18.979 CC module/env_dpdk/env_dpdk_rpc.o 00:13:19.238 CC module/sock/posix/posix.o 00:13:19.238 CC module/scheduler/dynamic/scheduler_dynamic.o 00:13:19.238 CC module/accel/ioat/accel_ioat.o 00:13:19.238 CC module/keyring/file/keyring.o 00:13:19.238 CC module/accel/error/accel_error.o 00:13:19.238 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:13:19.238 CC module/scheduler/gscheduler/gscheduler.o 00:13:19.238 CC module/accel/dsa/accel_dsa.o 00:13:19.238 CC module/blob/bdev/blob_bdev.o 00:13:19.238 LIB libspdk_env_dpdk_rpc.a 00:13:19.238 SO libspdk_env_dpdk_rpc.so.6.0 00:13:19.238 SYMLINK libspdk_env_dpdk_rpc.so 00:13:19.238 CC module/keyring/file/keyring_rpc.o 00:13:19.238 CC module/accel/ioat/accel_ioat_rpc.o 00:13:19.238 LIB libspdk_scheduler_gscheduler.a 00:13:19.238 LIB libspdk_scheduler_dpdk_governor.a 00:13:19.238 CC module/accel/error/accel_error_rpc.o 00:13:19.238 SO libspdk_scheduler_gscheduler.so.4.0 00:13:19.238 SO libspdk_scheduler_dpdk_governor.so.4.0 00:13:19.238 LIB libspdk_scheduler_dynamic.a 00:13:19.238 SO libspdk_scheduler_dynamic.so.4.0 00:13:19.497 SYMLINK libspdk_scheduler_gscheduler.so 00:13:19.497 SYMLINK libspdk_scheduler_dpdk_governor.so 00:13:19.497 CC module/accel/dsa/accel_dsa_rpc.o 00:13:19.497 SYMLINK libspdk_scheduler_dynamic.so 00:13:19.497 LIB libspdk_blob_bdev.a 00:13:19.497 LIB libspdk_keyring_file.a 00:13:19.497 LIB libspdk_accel_ioat.a 00:13:19.497 SO libspdk_blob_bdev.so.11.0 00:13:19.497 SO libspdk_accel_ioat.so.6.0 00:13:19.497 SO libspdk_keyring_file.so.1.0 00:13:19.497 LIB libspdk_accel_error.a 00:13:19.497 CC module/accel/iaa/accel_iaa.o 00:13:19.497 CC module/accel/iaa/accel_iaa_rpc.o 00:13:19.497 SO libspdk_accel_error.so.2.0 00:13:19.497 SYMLINK libspdk_blob_bdev.so 00:13:19.497 SYMLINK libspdk_accel_ioat.so 00:13:19.497 SYMLINK libspdk_keyring_file.so 00:13:19.497 LIB libspdk_accel_dsa.a 00:13:19.497 SYMLINK libspdk_accel_error.so 00:13:19.497 SO libspdk_accel_dsa.so.5.0 00:13:19.754 SYMLINK libspdk_accel_dsa.so 00:13:19.754 LIB libspdk_accel_iaa.a 00:13:19.754 SO libspdk_accel_iaa.so.3.0 00:13:19.754 CC module/bdev/delay/vbdev_delay.o 00:13:19.754 CC module/bdev/error/vbdev_error.o 00:13:19.754 CC module/bdev/null/bdev_null.o 00:13:19.754 CC module/blobfs/bdev/blobfs_bdev.o 00:13:19.754 CC module/bdev/malloc/bdev_malloc.o 00:13:19.754 LIB libspdk_sock_posix.a 00:13:19.754 CC 
module/bdev/gpt/gpt.o 00:13:19.754 CC module/bdev/lvol/vbdev_lvol.o 00:13:19.754 CC module/bdev/nvme/bdev_nvme.o 00:13:19.754 SYMLINK libspdk_accel_iaa.so 00:13:19.754 CC module/bdev/error/vbdev_error_rpc.o 00:13:19.754 SO libspdk_sock_posix.so.6.0 00:13:20.040 SYMLINK libspdk_sock_posix.so 00:13:20.040 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:13:20.040 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:13:20.040 CC module/bdev/gpt/vbdev_gpt.o 00:13:20.040 CC module/bdev/null/bdev_null_rpc.o 00:13:20.040 LIB libspdk_bdev_error.a 00:13:20.040 CC module/bdev/malloc/bdev_malloc_rpc.o 00:13:20.040 SO libspdk_bdev_error.so.6.0 00:13:20.298 LIB libspdk_blobfs_bdev.a 00:13:20.298 CC module/bdev/delay/vbdev_delay_rpc.o 00:13:20.298 CC module/bdev/nvme/bdev_nvme_rpc.o 00:13:20.298 SYMLINK libspdk_bdev_error.so 00:13:20.298 SO libspdk_blobfs_bdev.so.6.0 00:13:20.298 LIB libspdk_bdev_null.a 00:13:20.298 SYMLINK libspdk_blobfs_bdev.so 00:13:20.298 SO libspdk_bdev_null.so.6.0 00:13:20.298 LIB libspdk_bdev_lvol.a 00:13:20.298 LIB libspdk_bdev_malloc.a 00:13:20.298 LIB libspdk_bdev_gpt.a 00:13:20.298 LIB libspdk_bdev_delay.a 00:13:20.298 SO libspdk_bdev_malloc.so.6.0 00:13:20.298 SO libspdk_bdev_lvol.so.6.0 00:13:20.298 SO libspdk_bdev_gpt.so.6.0 00:13:20.298 SYMLINK libspdk_bdev_null.so 00:13:20.298 SO libspdk_bdev_delay.so.6.0 00:13:20.298 SYMLINK libspdk_bdev_gpt.so 00:13:20.298 CC module/bdev/passthru/vbdev_passthru.o 00:13:20.298 SYMLINK libspdk_bdev_malloc.so 00:13:20.298 CC module/bdev/nvme/nvme_rpc.o 00:13:20.298 CC module/bdev/nvme/bdev_mdns_client.o 00:13:20.298 SYMLINK libspdk_bdev_lvol.so 00:13:20.557 SYMLINK libspdk_bdev_delay.so 00:13:20.557 CC module/bdev/raid/bdev_raid.o 00:13:20.557 CC module/bdev/split/vbdev_split.o 00:13:20.557 CC module/bdev/zone_block/vbdev_zone_block.o 00:13:20.557 CC module/bdev/aio/bdev_aio.o 00:13:20.557 CC module/bdev/ftl/bdev_ftl.o 00:13:20.557 CC module/bdev/aio/bdev_aio_rpc.o 00:13:20.816 CC module/bdev/ftl/bdev_ftl_rpc.o 00:13:20.816 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:13:20.816 CC module/bdev/split/vbdev_split_rpc.o 00:13:20.816 CC module/bdev/nvme/vbdev_opal.o 00:13:20.816 CC module/bdev/nvme/vbdev_opal_rpc.o 00:13:20.816 LIB libspdk_bdev_passthru.a 00:13:20.816 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:13:20.816 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:13:20.816 SO libspdk_bdev_passthru.so.6.0 00:13:20.816 LIB libspdk_bdev_aio.a 00:13:20.816 LIB libspdk_bdev_ftl.a 00:13:20.816 LIB libspdk_bdev_split.a 00:13:20.816 SO libspdk_bdev_aio.so.6.0 00:13:20.816 SO libspdk_bdev_ftl.so.6.0 00:13:21.074 SO libspdk_bdev_split.so.6.0 00:13:21.074 SYMLINK libspdk_bdev_passthru.so 00:13:21.074 CC module/bdev/raid/bdev_raid_rpc.o 00:13:21.074 SYMLINK libspdk_bdev_aio.so 00:13:21.074 SYMLINK libspdk_bdev_ftl.so 00:13:21.074 CC module/bdev/raid/bdev_raid_sb.o 00:13:21.074 SYMLINK libspdk_bdev_split.so 00:13:21.074 CC module/bdev/raid/raid0.o 00:13:21.074 CC module/bdev/raid/raid1.o 00:13:21.074 CC module/bdev/raid/concat.o 00:13:21.074 LIB libspdk_bdev_zone_block.a 00:13:21.074 SO libspdk_bdev_zone_block.so.6.0 00:13:21.074 SYMLINK libspdk_bdev_zone_block.so 00:13:21.074 CC module/bdev/virtio/bdev_virtio_scsi.o 00:13:21.074 CC module/bdev/virtio/bdev_virtio_blk.o 00:13:21.074 CC module/bdev/virtio/bdev_virtio_rpc.o 00:13:21.074 CC module/bdev/iscsi/bdev_iscsi.o 00:13:21.332 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:13:21.332 LIB libspdk_bdev_raid.a 00:13:21.332 SO libspdk_bdev_raid.so.6.0 00:13:21.591 SYMLINK libspdk_bdev_raid.so 00:13:21.591 LIB 
libspdk_bdev_iscsi.a 00:13:21.591 SO libspdk_bdev_iscsi.so.6.0 00:13:21.591 SYMLINK libspdk_bdev_iscsi.so 00:13:21.591 LIB libspdk_bdev_virtio.a 00:13:21.591 SO libspdk_bdev_virtio.so.6.0 00:13:21.850 SYMLINK libspdk_bdev_virtio.so 00:13:21.850 LIB libspdk_bdev_nvme.a 00:13:21.850 SO libspdk_bdev_nvme.so.7.0 00:13:22.108 SYMLINK libspdk_bdev_nvme.so 00:13:22.674 CC module/event/subsystems/vmd/vmd_rpc.o 00:13:22.674 CC module/event/subsystems/vmd/vmd.o 00:13:22.674 CC module/event/subsystems/scheduler/scheduler.o 00:13:22.674 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:13:22.674 CC module/event/subsystems/iobuf/iobuf.o 00:13:22.674 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:13:22.674 CC module/event/subsystems/sock/sock.o 00:13:22.674 CC module/event/subsystems/keyring/keyring.o 00:13:22.674 LIB libspdk_event_vmd.a 00:13:22.674 LIB libspdk_event_keyring.a 00:13:22.674 LIB libspdk_event_iobuf.a 00:13:22.674 LIB libspdk_event_sock.a 00:13:22.674 SO libspdk_event_vmd.so.6.0 00:13:22.674 SO libspdk_event_keyring.so.1.0 00:13:22.932 SO libspdk_event_iobuf.so.3.0 00:13:22.932 LIB libspdk_event_vhost_blk.a 00:13:22.932 LIB libspdk_event_scheduler.a 00:13:22.932 SO libspdk_event_sock.so.5.0 00:13:22.932 SO libspdk_event_vhost_blk.so.3.0 00:13:22.932 SYMLINK libspdk_event_vmd.so 00:13:22.932 SO libspdk_event_scheduler.so.4.0 00:13:22.932 SYMLINK libspdk_event_keyring.so 00:13:22.932 SYMLINK libspdk_event_iobuf.so 00:13:22.932 SYMLINK libspdk_event_sock.so 00:13:22.932 SYMLINK libspdk_event_scheduler.so 00:13:22.932 SYMLINK libspdk_event_vhost_blk.so 00:13:23.191 CC module/event/subsystems/accel/accel.o 00:13:23.450 LIB libspdk_event_accel.a 00:13:23.450 SO libspdk_event_accel.so.6.0 00:13:23.450 SYMLINK libspdk_event_accel.so 00:13:23.709 CC module/event/subsystems/bdev/bdev.o 00:13:23.966 LIB libspdk_event_bdev.a 00:13:23.966 SO libspdk_event_bdev.so.6.0 00:13:24.225 SYMLINK libspdk_event_bdev.so 00:13:24.484 CC module/event/subsystems/scsi/scsi.o 00:13:24.484 CC module/event/subsystems/ublk/ublk.o 00:13:24.484 CC module/event/subsystems/nbd/nbd.o 00:13:24.484 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:13:24.484 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:13:24.484 LIB libspdk_event_ublk.a 00:13:24.484 LIB libspdk_event_scsi.a 00:13:24.484 LIB libspdk_event_nbd.a 00:13:24.484 SO libspdk_event_ublk.so.3.0 00:13:24.484 SO libspdk_event_scsi.so.6.0 00:13:24.484 SO libspdk_event_nbd.so.6.0 00:13:24.784 SYMLINK libspdk_event_ublk.so 00:13:24.784 SYMLINK libspdk_event_scsi.so 00:13:24.784 LIB libspdk_event_nvmf.a 00:13:24.784 SYMLINK libspdk_event_nbd.so 00:13:24.784 SO libspdk_event_nvmf.so.6.0 00:13:24.784 SYMLINK libspdk_event_nvmf.so 00:13:25.044 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:13:25.044 CC module/event/subsystems/iscsi/iscsi.o 00:13:25.044 LIB libspdk_event_vhost_scsi.a 00:13:25.303 LIB libspdk_event_iscsi.a 00:13:25.303 SO libspdk_event_vhost_scsi.so.3.0 00:13:25.303 SO libspdk_event_iscsi.so.6.0 00:13:25.303 SYMLINK libspdk_event_vhost_scsi.so 00:13:25.303 SYMLINK libspdk_event_iscsi.so 00:13:25.564 SO libspdk.so.6.0 00:13:25.564 SYMLINK libspdk.so 00:13:25.823 TEST_HEADER include/spdk/accel.h 00:13:25.823 TEST_HEADER include/spdk/accel_module.h 00:13:25.823 CXX app/trace/trace.o 00:13:25.823 TEST_HEADER include/spdk/assert.h 00:13:25.823 TEST_HEADER include/spdk/barrier.h 00:13:25.823 TEST_HEADER include/spdk/base64.h 00:13:25.823 TEST_HEADER include/spdk/bdev.h 00:13:25.823 TEST_HEADER include/spdk/bdev_module.h 00:13:25.823 TEST_HEADER 
include/spdk/bdev_zone.h 00:13:25.823 TEST_HEADER include/spdk/bit_array.h 00:13:25.823 TEST_HEADER include/spdk/bit_pool.h 00:13:25.823 TEST_HEADER include/spdk/blob_bdev.h 00:13:25.823 TEST_HEADER include/spdk/blobfs_bdev.h 00:13:25.823 TEST_HEADER include/spdk/blobfs.h 00:13:25.823 TEST_HEADER include/spdk/blob.h 00:13:25.823 TEST_HEADER include/spdk/conf.h 00:13:25.823 TEST_HEADER include/spdk/config.h 00:13:25.823 TEST_HEADER include/spdk/cpuset.h 00:13:25.823 TEST_HEADER include/spdk/crc16.h 00:13:25.823 TEST_HEADER include/spdk/crc32.h 00:13:25.823 TEST_HEADER include/spdk/crc64.h 00:13:25.823 TEST_HEADER include/spdk/dif.h 00:13:25.823 TEST_HEADER include/spdk/dma.h 00:13:25.823 TEST_HEADER include/spdk/endian.h 00:13:25.823 TEST_HEADER include/spdk/env_dpdk.h 00:13:25.823 TEST_HEADER include/spdk/env.h 00:13:25.823 TEST_HEADER include/spdk/event.h 00:13:25.823 TEST_HEADER include/spdk/fd_group.h 00:13:25.823 TEST_HEADER include/spdk/fd.h 00:13:25.823 TEST_HEADER include/spdk/file.h 00:13:25.823 TEST_HEADER include/spdk/ftl.h 00:13:25.823 TEST_HEADER include/spdk/gpt_spec.h 00:13:25.823 TEST_HEADER include/spdk/hexlify.h 00:13:25.823 TEST_HEADER include/spdk/histogram_data.h 00:13:25.823 TEST_HEADER include/spdk/idxd.h 00:13:25.823 TEST_HEADER include/spdk/idxd_spec.h 00:13:25.823 TEST_HEADER include/spdk/init.h 00:13:25.823 TEST_HEADER include/spdk/ioat.h 00:13:25.823 TEST_HEADER include/spdk/ioat_spec.h 00:13:25.823 TEST_HEADER include/spdk/iscsi_spec.h 00:13:25.823 TEST_HEADER include/spdk/json.h 00:13:25.823 TEST_HEADER include/spdk/jsonrpc.h 00:13:25.823 TEST_HEADER include/spdk/keyring.h 00:13:25.823 TEST_HEADER include/spdk/keyring_module.h 00:13:25.823 TEST_HEADER include/spdk/likely.h 00:13:25.823 TEST_HEADER include/spdk/log.h 00:13:25.823 TEST_HEADER include/spdk/lvol.h 00:13:25.823 CC examples/accel/perf/accel_perf.o 00:13:25.823 TEST_HEADER include/spdk/memory.h 00:13:25.823 CC test/event/event_perf/event_perf.o 00:13:25.823 TEST_HEADER include/spdk/mmio.h 00:13:25.823 TEST_HEADER include/spdk/nbd.h 00:13:25.823 TEST_HEADER include/spdk/notify.h 00:13:25.823 TEST_HEADER include/spdk/nvme.h 00:13:25.823 TEST_HEADER include/spdk/nvme_intel.h 00:13:25.823 TEST_HEADER include/spdk/nvme_ocssd.h 00:13:25.823 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:13:25.823 TEST_HEADER include/spdk/nvme_spec.h 00:13:25.823 TEST_HEADER include/spdk/nvme_zns.h 00:13:25.823 TEST_HEADER include/spdk/nvmf_cmd.h 00:13:25.823 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:13:25.823 TEST_HEADER include/spdk/nvmf.h 00:13:25.823 CC test/bdev/bdevio/bdevio.o 00:13:25.823 TEST_HEADER include/spdk/nvmf_spec.h 00:13:25.823 CC test/accel/dif/dif.o 00:13:25.823 TEST_HEADER include/spdk/nvmf_transport.h 00:13:25.823 TEST_HEADER include/spdk/opal.h 00:13:25.823 TEST_HEADER include/spdk/opal_spec.h 00:13:25.823 CC test/app/bdev_svc/bdev_svc.o 00:13:25.823 TEST_HEADER include/spdk/pci_ids.h 00:13:25.823 CC test/dma/test_dma/test_dma.o 00:13:25.823 TEST_HEADER include/spdk/pipe.h 00:13:25.823 TEST_HEADER include/spdk/queue.h 00:13:25.823 TEST_HEADER include/spdk/reduce.h 00:13:25.823 TEST_HEADER include/spdk/rpc.h 00:13:25.823 TEST_HEADER include/spdk/scheduler.h 00:13:25.823 TEST_HEADER include/spdk/scsi.h 00:13:25.823 CC test/blobfs/mkfs/mkfs.o 00:13:25.823 TEST_HEADER include/spdk/scsi_spec.h 00:13:25.823 TEST_HEADER include/spdk/sock.h 00:13:25.823 TEST_HEADER include/spdk/stdinc.h 00:13:25.823 TEST_HEADER include/spdk/string.h 00:13:25.823 TEST_HEADER include/spdk/thread.h 00:13:25.823 TEST_HEADER 
include/spdk/trace.h 00:13:25.823 TEST_HEADER include/spdk/trace_parser.h 00:13:25.823 TEST_HEADER include/spdk/tree.h 00:13:25.823 TEST_HEADER include/spdk/ublk.h 00:13:25.823 TEST_HEADER include/spdk/util.h 00:13:26.081 TEST_HEADER include/spdk/uuid.h 00:13:26.081 CC test/env/mem_callbacks/mem_callbacks.o 00:13:26.081 TEST_HEADER include/spdk/version.h 00:13:26.081 TEST_HEADER include/spdk/vfio_user_pci.h 00:13:26.081 TEST_HEADER include/spdk/vfio_user_spec.h 00:13:26.081 TEST_HEADER include/spdk/vhost.h 00:13:26.081 TEST_HEADER include/spdk/vmd.h 00:13:26.081 TEST_HEADER include/spdk/xor.h 00:13:26.081 TEST_HEADER include/spdk/zipf.h 00:13:26.081 CXX test/cpp_headers/accel.o 00:13:26.081 LINK event_perf 00:13:26.081 LINK bdev_svc 00:13:26.081 LINK mkfs 00:13:26.081 CXX test/cpp_headers/accel_module.o 00:13:26.081 LINK spdk_trace 00:13:26.339 LINK bdevio 00:13:26.339 CC test/event/reactor/reactor.o 00:13:26.339 LINK accel_perf 00:13:26.339 CXX test/cpp_headers/assert.o 00:13:26.339 LINK dif 00:13:26.339 LINK test_dma 00:13:26.339 LINK reactor 00:13:26.339 CC test/app/histogram_perf/histogram_perf.o 00:13:26.598 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:13:26.598 CXX test/cpp_headers/barrier.o 00:13:26.598 CC app/trace_record/trace_record.o 00:13:26.598 CXX test/cpp_headers/base64.o 00:13:26.598 CC test/app/jsoncat/jsoncat.o 00:13:26.598 CXX test/cpp_headers/bdev.o 00:13:26.598 LINK mem_callbacks 00:13:26.599 LINK histogram_perf 00:13:26.599 CC test/event/reactor_perf/reactor_perf.o 00:13:26.599 LINK jsoncat 00:13:26.599 CC examples/bdev/hello_world/hello_bdev.o 00:13:26.857 LINK spdk_trace_record 00:13:26.857 CXX test/cpp_headers/bdev_module.o 00:13:26.857 LINK reactor_perf 00:13:26.857 CC test/env/vtophys/vtophys.o 00:13:26.857 CC test/nvme/aer/aer.o 00:13:26.857 LINK nvme_fuzz 00:13:26.857 CC test/rpc_client/rpc_client_test.o 00:13:26.857 CC test/lvol/esnap/esnap.o 00:13:26.857 CXX test/cpp_headers/bdev_zone.o 00:13:26.857 LINK hello_bdev 00:13:26.857 LINK vtophys 00:13:27.115 CC app/nvmf_tgt/nvmf_main.o 00:13:27.115 CC test/thread/poller_perf/poller_perf.o 00:13:27.115 LINK rpc_client_test 00:13:27.115 CC test/event/app_repeat/app_repeat.o 00:13:27.115 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:13:27.115 CXX test/cpp_headers/bit_array.o 00:13:27.115 LINK aer 00:13:27.115 LINK poller_perf 00:13:27.115 LINK nvmf_tgt 00:13:27.115 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:13:27.115 LINK app_repeat 00:13:27.373 CXX test/cpp_headers/bit_pool.o 00:13:27.373 CC examples/bdev/bdevperf/bdevperf.o 00:13:27.373 CXX test/cpp_headers/blob_bdev.o 00:13:27.373 CC test/env/memory/memory_ut.o 00:13:27.373 CC test/nvme/reset/reset.o 00:13:27.373 LINK env_dpdk_post_init 00:13:27.630 CC test/env/pci/pci_ut.o 00:13:27.630 CXX test/cpp_headers/blobfs_bdev.o 00:13:27.630 CC app/iscsi_tgt/iscsi_tgt.o 00:13:27.630 CC test/event/scheduler/scheduler.o 00:13:27.630 CXX test/cpp_headers/blobfs.o 00:13:27.630 LINK reset 00:13:27.630 CXX test/cpp_headers/blob.o 00:13:27.630 LINK iscsi_tgt 00:13:27.888 LINK scheduler 00:13:27.888 CXX test/cpp_headers/conf.o 00:13:27.888 LINK pci_ut 00:13:27.888 CC test/nvme/sgl/sgl.o 00:13:27.888 CC examples/blob/hello_world/hello_blob.o 00:13:27.888 CXX test/cpp_headers/config.o 00:13:28.146 CXX test/cpp_headers/cpuset.o 00:13:28.146 LINK bdevperf 00:13:28.146 CC test/nvme/e2edp/nvme_dp.o 00:13:28.146 CC app/spdk_tgt/spdk_tgt.o 00:13:28.146 LINK hello_blob 00:13:28.146 LINK sgl 00:13:28.146 LINK memory_ut 00:13:28.146 CXX test/cpp_headers/crc16.o 00:13:28.146 CC 
test/nvme/overhead/overhead.o 00:13:28.404 LINK spdk_tgt 00:13:28.404 LINK nvme_dp 00:13:28.404 CC test/nvme/err_injection/err_injection.o 00:13:28.404 CXX test/cpp_headers/crc32.o 00:13:28.404 CC examples/blob/cli/blobcli.o 00:13:28.404 LINK overhead 00:13:28.404 CXX test/cpp_headers/crc64.o 00:13:28.404 CXX test/cpp_headers/dif.o 00:13:28.663 LINK err_injection 00:13:28.663 CC examples/ioat/perf/perf.o 00:13:28.663 CC examples/nvme/hello_world/hello_world.o 00:13:28.663 LINK iscsi_fuzz 00:13:28.663 CC app/spdk_lspci/spdk_lspci.o 00:13:28.663 CXX test/cpp_headers/dma.o 00:13:28.663 CXX test/cpp_headers/endian.o 00:13:28.923 CC examples/ioat/verify/verify.o 00:13:28.923 LINK hello_world 00:13:28.923 LINK ioat_perf 00:13:28.923 CC test/nvme/startup/startup.o 00:13:28.923 LINK spdk_lspci 00:13:28.923 CXX test/cpp_headers/env_dpdk.o 00:13:28.923 CC test/nvme/reserve/reserve.o 00:13:28.923 LINK blobcli 00:13:28.923 CXX test/cpp_headers/env.o 00:13:28.923 LINK startup 00:13:28.923 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:13:29.183 LINK verify 00:13:29.183 CC examples/nvme/reconnect/reconnect.o 00:13:29.183 CC app/spdk_nvme_perf/perf.o 00:13:29.183 CC app/spdk_nvme_identify/identify.o 00:13:29.183 CXX test/cpp_headers/event.o 00:13:29.183 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:13:29.183 LINK reserve 00:13:29.183 CXX test/cpp_headers/fd_group.o 00:13:29.183 CC test/nvme/simple_copy/simple_copy.o 00:13:29.183 CC test/nvme/connect_stress/connect_stress.o 00:13:29.445 CXX test/cpp_headers/fd.o 00:13:29.445 CC test/nvme/boot_partition/boot_partition.o 00:13:29.445 LINK reconnect 00:13:29.445 CC app/spdk_nvme_discover/discovery_aer.o 00:13:29.445 CXX test/cpp_headers/file.o 00:13:29.445 LINK connect_stress 00:13:29.445 LINK simple_copy 00:13:29.445 LINK vhost_fuzz 00:13:29.445 LINK boot_partition 00:13:29.713 LINK spdk_nvme_discover 00:13:29.713 CXX test/cpp_headers/ftl.o 00:13:29.713 CXX test/cpp_headers/gpt_spec.o 00:13:29.713 CC examples/nvme/nvme_manage/nvme_manage.o 00:13:29.713 CC app/spdk_top/spdk_top.o 00:13:29.713 CC test/app/stub/stub.o 00:13:29.713 CC test/nvme/compliance/nvme_compliance.o 00:13:29.713 LINK spdk_nvme_perf 00:13:29.713 CXX test/cpp_headers/hexlify.o 00:13:29.972 LINK spdk_nvme_identify 00:13:29.972 CC app/vhost/vhost.o 00:13:29.972 CC app/spdk_dd/spdk_dd.o 00:13:29.972 CXX test/cpp_headers/histogram_data.o 00:13:29.972 LINK stub 00:13:29.972 CXX test/cpp_headers/idxd.o 00:13:29.972 LINK vhost 00:13:29.972 LINK nvme_compliance 00:13:30.229 LINK nvme_manage 00:13:30.229 CXX test/cpp_headers/idxd_spec.o 00:13:30.229 CXX test/cpp_headers/init.o 00:13:30.229 CC app/fio/nvme/fio_plugin.o 00:13:30.229 CC app/fio/bdev/fio_plugin.o 00:13:30.229 LINK spdk_dd 00:13:30.229 CC examples/nvme/arbitration/arbitration.o 00:13:30.230 CXX test/cpp_headers/ioat.o 00:13:30.487 CC examples/nvme/hotplug/hotplug.o 00:13:30.487 CC test/nvme/fused_ordering/fused_ordering.o 00:13:30.487 CC examples/nvme/cmb_copy/cmb_copy.o 00:13:30.487 LINK spdk_top 00:13:30.487 CXX test/cpp_headers/ioat_spec.o 00:13:30.487 LINK cmb_copy 00:13:30.487 LINK fused_ordering 00:13:30.487 LINK hotplug 00:13:30.487 CC examples/nvme/abort/abort.o 00:13:30.746 LINK spdk_nvme 00:13:30.746 CXX test/cpp_headers/iscsi_spec.o 00:13:30.746 LINK arbitration 00:13:30.746 LINK spdk_bdev 00:13:30.746 CXX test/cpp_headers/json.o 00:13:30.746 CXX test/cpp_headers/jsonrpc.o 00:13:30.746 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:13:30.746 CC test/nvme/doorbell_aers/doorbell_aers.o 00:13:31.003 CXX 
test/cpp_headers/keyring.o 00:13:31.003 CC test/nvme/fdp/fdp.o 00:13:31.003 CC test/nvme/cuse/cuse.o 00:13:31.003 CC examples/sock/hello_world/hello_sock.o 00:13:31.003 LINK pmr_persistence 00:13:31.003 LINK abort 00:13:31.003 CC examples/vmd/lsvmd/lsvmd.o 00:13:31.003 LINK doorbell_aers 00:13:31.003 CXX test/cpp_headers/keyring_module.o 00:13:31.003 LINK esnap 00:13:31.003 CC examples/nvmf/nvmf/nvmf.o 00:13:31.003 CXX test/cpp_headers/likely.o 00:13:31.003 LINK lsvmd 00:13:31.003 CXX test/cpp_headers/log.o 00:13:31.261 LINK hello_sock 00:13:31.261 LINK fdp 00:13:31.261 CXX test/cpp_headers/lvol.o 00:13:31.261 CC examples/vmd/led/led.o 00:13:31.261 CXX test/cpp_headers/memory.o 00:13:31.261 CC examples/util/zipf/zipf.o 00:13:31.261 CXX test/cpp_headers/mmio.o 00:13:31.261 LINK nvmf 00:13:31.519 LINK led 00:13:31.519 CXX test/cpp_headers/nbd.o 00:13:31.519 CC examples/idxd/perf/perf.o 00:13:31.519 CC examples/thread/thread/thread_ex.o 00:13:31.519 LINK zipf 00:13:31.519 CXX test/cpp_headers/notify.o 00:13:31.519 CXX test/cpp_headers/nvme.o 00:13:31.519 CC examples/interrupt_tgt/interrupt_tgt.o 00:13:31.519 CXX test/cpp_headers/nvme_intel.o 00:13:31.519 CXX test/cpp_headers/nvme_ocssd.o 00:13:31.779 CXX test/cpp_headers/nvme_ocssd_spec.o 00:13:31.779 CXX test/cpp_headers/nvme_spec.o 00:13:31.779 CXX test/cpp_headers/nvme_zns.o 00:13:31.779 CXX test/cpp_headers/nvmf_cmd.o 00:13:31.779 LINK thread 00:13:31.779 LINK idxd_perf 00:13:31.779 CXX test/cpp_headers/nvmf_fc_spec.o 00:13:31.779 LINK interrupt_tgt 00:13:31.779 CXX test/cpp_headers/nvmf.o 00:13:31.779 CXX test/cpp_headers/nvmf_spec.o 00:13:31.779 LINK cuse 00:13:32.038 CXX test/cpp_headers/nvmf_transport.o 00:13:32.038 CXX test/cpp_headers/opal.o 00:13:32.038 CXX test/cpp_headers/opal_spec.o 00:13:32.038 CXX test/cpp_headers/pci_ids.o 00:13:32.038 CXX test/cpp_headers/pipe.o 00:13:32.038 CXX test/cpp_headers/queue.o 00:13:32.038 CXX test/cpp_headers/reduce.o 00:13:32.038 CXX test/cpp_headers/rpc.o 00:13:32.038 CXX test/cpp_headers/scheduler.o 00:13:32.038 CXX test/cpp_headers/scsi.o 00:13:32.038 CXX test/cpp_headers/scsi_spec.o 00:13:32.038 CXX test/cpp_headers/stdinc.o 00:13:32.038 CXX test/cpp_headers/sock.o 00:13:32.038 CXX test/cpp_headers/string.o 00:13:32.038 CXX test/cpp_headers/thread.o 00:13:32.038 CXX test/cpp_headers/trace.o 00:13:32.296 CXX test/cpp_headers/trace_parser.o 00:13:32.296 CXX test/cpp_headers/tree.o 00:13:32.296 CXX test/cpp_headers/ublk.o 00:13:32.296 CXX test/cpp_headers/util.o 00:13:32.296 CXX test/cpp_headers/uuid.o 00:13:32.296 CXX test/cpp_headers/version.o 00:13:32.296 CXX test/cpp_headers/vfio_user_pci.o 00:13:32.296 CXX test/cpp_headers/vfio_user_spec.o 00:13:32.296 CXX test/cpp_headers/vhost.o 00:13:32.296 CXX test/cpp_headers/vmd.o 00:13:32.296 CXX test/cpp_headers/xor.o 00:13:32.296 CXX test/cpp_headers/zipf.o 00:13:38.858 00:13:38.858 real 0m57.816s 00:13:38.858 user 5m1.231s 00:13:38.858 sys 1m8.578s 00:13:38.858 21:16:27 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:13:38.858 ************************************ 00:13:38.858 END TEST make 00:13:38.858 ************************************ 00:13:38.858 21:16:27 -- common/autotest_common.sh@10 -- $ set +x 00:13:38.858 21:16:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:13:38.858 21:16:27 -- pm/common@30 -- $ signal_monitor_resources TERM 00:13:38.858 21:16:27 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:13:38.858 21:16:27 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:38.858 21:16:27 -- 
pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:13:38.858 21:16:27 -- pm/common@45 -- $ pid=6098 00:13:38.858 21:16:27 -- pm/common@52 -- $ sudo kill -TERM 6098 00:13:38.858 21:16:27 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:13:38.859 21:16:27 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:13:38.859 21:16:27 -- pm/common@45 -- $ pid=6095 00:13:38.859 21:16:27 -- pm/common@52 -- $ sudo kill -TERM 6095 00:13:38.859 21:16:27 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:38.859 21:16:27 -- nvmf/common.sh@7 -- # uname -s 00:13:38.859 21:16:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.859 21:16:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.859 21:16:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.859 21:16:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.859 21:16:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.859 21:16:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.859 21:16:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.859 21:16:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.859 21:16:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.859 21:16:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.859 21:16:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:13:38.859 21:16:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:13:38.859 21:16:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.859 21:16:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.859 21:16:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:38.859 21:16:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.859 21:16:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:38.859 21:16:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.859 21:16:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.859 21:16:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.859 21:16:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.859 21:16:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.859 21:16:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.859 21:16:27 -- paths/export.sh@5 -- # export PATH 00:13:38.859 21:16:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.859 21:16:27 -- nvmf/common.sh@47 -- # : 0 00:13:38.859 21:16:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:38.859 21:16:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:38.859 21:16:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.859 21:16:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.859 21:16:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.859 21:16:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:38.859 21:16:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:38.859 21:16:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:38.859 21:16:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:13:38.859 21:16:27 -- spdk/autotest.sh@32 -- # uname -s 00:13:38.859 21:16:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:13:38.859 21:16:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:13:38.859 21:16:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:38.859 21:16:27 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:13:38.859 21:16:27 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:13:38.859 21:16:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:13:38.859 21:16:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:13:38.859 21:16:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:13:38.859 21:16:27 -- spdk/autotest.sh@48 -- # udevadm_pid=67039 00:13:38.859 21:16:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:13:38.859 21:16:27 -- pm/common@17 -- # local monitor 00:13:38.859 21:16:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:38.859 21:16:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:13:38.859 21:16:27 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=67043 00:13:38.859 21:16:27 -- pm/common@21 -- # date +%s 00:13:38.859 21:16:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:13:38.859 21:16:27 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=67050 00:13:38.859 21:16:27 -- pm/common@26 -- # sleep 1 00:13:38.859 21:16:27 -- pm/common@21 -- # date +%s 00:13:38.859 21:16:27 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714166187 00:13:38.859 21:16:27 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714166187 00:13:38.859 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714166187_collect-vmstat.pm.log 00:13:38.859 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714166187_collect-cpu-load.pm.log 00:13:39.795 21:16:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:13:39.795 21:16:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:13:39.795 21:16:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:39.795 21:16:28 -- common/autotest_common.sh@10 -- # set +x 00:13:39.795 21:16:28 -- spdk/autotest.sh@59 -- # 
create_test_list 00:13:39.795 21:16:28 -- common/autotest_common.sh@734 -- # xtrace_disable 00:13:39.795 21:16:28 -- common/autotest_common.sh@10 -- # set +x 00:13:39.795 21:16:28 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:13:39.795 21:16:28 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:13:39.795 21:16:28 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:13:39.795 21:16:28 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:13:39.795 21:16:28 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:13:39.795 21:16:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:13:39.795 21:16:28 -- common/autotest_common.sh@1441 -- # uname 00:13:39.795 21:16:28 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:13:39.795 21:16:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:13:39.795 21:16:28 -- common/autotest_common.sh@1461 -- # uname 00:13:39.795 21:16:28 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:13:39.795 21:16:28 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:13:39.795 21:16:28 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:13:39.795 21:16:28 -- spdk/autotest.sh@72 -- # hash lcov 00:13:39.795 21:16:28 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:13:39.795 21:16:28 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:13:39.795 --rc lcov_branch_coverage=1 00:13:39.795 --rc lcov_function_coverage=1 00:13:39.795 --rc genhtml_branch_coverage=1 00:13:39.795 --rc genhtml_function_coverage=1 00:13:39.795 --rc genhtml_legend=1 00:13:39.795 --rc geninfo_all_blocks=1 00:13:39.795 ' 00:13:39.795 21:16:28 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:13:39.795 --rc lcov_branch_coverage=1 00:13:39.795 --rc lcov_function_coverage=1 00:13:39.795 --rc genhtml_branch_coverage=1 00:13:39.795 --rc genhtml_function_coverage=1 00:13:39.795 --rc genhtml_legend=1 00:13:39.795 --rc geninfo_all_blocks=1 00:13:39.795 ' 00:13:39.795 21:16:28 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:13:39.795 --rc lcov_branch_coverage=1 00:13:39.795 --rc lcov_function_coverage=1 00:13:39.795 --rc genhtml_branch_coverage=1 00:13:39.795 --rc genhtml_function_coverage=1 00:13:39.795 --rc genhtml_legend=1 00:13:39.795 --rc geninfo_all_blocks=1 00:13:39.795 --no-external' 00:13:39.795 21:16:28 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:13:39.795 --rc lcov_branch_coverage=1 00:13:39.795 --rc lcov_function_coverage=1 00:13:39.795 --rc genhtml_branch_coverage=1 00:13:39.795 --rc genhtml_function_coverage=1 00:13:39.795 --rc genhtml_legend=1 00:13:39.795 --rc geninfo_all_blocks=1 00:13:39.795 --no-external' 00:13:39.795 21:16:28 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:13:39.795 lcov: LCOV version 1.14 00:13:40.054 21:16:29 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:13:48.194 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:13:48.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:13:48.194 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:13:48.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:13:48.194 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:13:48.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:13:54.772 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:13:54.772 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:14:06.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:14:06.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:14:06.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:14:06.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:14:06.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:14:06.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:14:06.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:14:06.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:14:06.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:14:06.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:14:06.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:14:06.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:14:06.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:14:06.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:14:06.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:14:06.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:14:06.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:14:06.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:14:06.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:14:06.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:14:06.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:14:06.990 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:14:06.990 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no 
functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:14:06.991 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:14:06.991 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:14:06.991 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:14:06.991 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 
00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:14:06.992 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:14:06.992 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:14:10.279 21:16:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:14:10.279 21:16:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:10.279 21:16:59 -- common/autotest_common.sh@10 -- # set +x 00:14:10.279 21:16:59 -- spdk/autotest.sh@91 -- # rm -f 00:14:10.279 21:16:59 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:11.214 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:11.214 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:14:11.214 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:14:11.214 21:17:00 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:14:11.214 21:17:00 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:11.214 21:17:00 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:11.214 21:17:00 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:11.214 21:17:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:11.214 21:17:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:11.214 21:17:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:11.214 21:17:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:11.214 21:17:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:11.214 21:17:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:11.214 21:17:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:14:11.214 21:17:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:11.214 21:17:00 -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:14:11.214 21:17:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:11.214 21:17:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:11.214 21:17:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:14:11.214 21:17:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:14:11.214 21:17:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:11.214 21:17:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:11.214 21:17:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:11.214 21:17:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:14:11.214 21:17:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:14:11.214 21:17:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:11.214 21:17:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:11.214 21:17:00 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:14:11.214 21:17:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:11.214 21:17:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:11.214 21:17:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:14:11.214 21:17:00 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:14:11.214 21:17:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:11.214 No valid GPT data, bailing 00:14:11.214 21:17:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:11.214 21:17:00 -- scripts/common.sh@391 -- # pt= 00:14:11.214 21:17:00 -- scripts/common.sh@392 -- # return 1 00:14:11.214 21:17:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:14:11.214 1+0 records in 00:14:11.214 1+0 records out 00:14:11.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00615749 s, 170 MB/s 00:14:11.214 21:17:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:11.214 21:17:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:11.214 21:17:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:14:11.214 21:17:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:14:11.214 21:17:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:14:11.473 No valid GPT data, bailing 00:14:11.473 21:17:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:14:11.473 21:17:00 -- scripts/common.sh@391 -- # pt= 00:14:11.473 21:17:00 -- scripts/common.sh@392 -- # return 1 00:14:11.473 21:17:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:14:11.473 1+0 records in 00:14:11.473 1+0 records out 00:14:11.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00734344 s, 143 MB/s 00:14:11.473 21:17:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:11.473 21:17:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:11.473 21:17:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:14:11.473 21:17:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:14:11.473 21:17:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:14:11.473 No valid GPT data, bailing 00:14:11.473 21:17:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:14:11.473 21:17:00 -- scripts/common.sh@391 -- # pt= 00:14:11.473 21:17:00 -- scripts/common.sh@392 -- # return 1 00:14:11.473 21:17:00 -- spdk/autotest.sh@114 -- # dd 
if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:14:11.473 1+0 records in 00:14:11.473 1+0 records out 00:14:11.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00725419 s, 145 MB/s 00:14:11.473 21:17:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:11.473 21:17:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:11.473 21:17:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:14:11.473 21:17:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:14:11.473 21:17:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:14:11.473 No valid GPT data, bailing 00:14:11.473 21:17:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:14:11.473 21:17:00 -- scripts/common.sh@391 -- # pt= 00:14:11.473 21:17:00 -- scripts/common.sh@392 -- # return 1 00:14:11.473 21:17:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:14:11.473 1+0 records in 00:14:11.473 1+0 records out 00:14:11.473 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590531 s, 178 MB/s 00:14:11.473 21:17:00 -- spdk/autotest.sh@118 -- # sync 00:14:11.473 21:17:00 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:14:11.473 21:17:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:14:11.473 21:17:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:14:14.762 21:17:03 -- spdk/autotest.sh@124 -- # uname -s 00:14:14.762 21:17:03 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:14:14.762 21:17:03 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:14.762 21:17:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:14.762 21:17:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.762 21:17:03 -- common/autotest_common.sh@10 -- # set +x 00:14:14.762 ************************************ 00:14:14.762 START TEST setup.sh 00:14:14.762 ************************************ 00:14:14.762 21:17:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:14.762 * Looking for test storage... 00:14:14.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:14.762 21:17:03 -- setup/test-setup.sh@10 -- # uname -s 00:14:14.762 21:17:03 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:14:14.762 21:17:03 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:14.762 21:17:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:14.762 21:17:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.762 21:17:03 -- common/autotest_common.sh@10 -- # set +x 00:14:14.762 ************************************ 00:14:14.762 START TEST acl 00:14:14.762 ************************************ 00:14:14.762 21:17:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:14.762 * Looking for test storage... 
00:14:14.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:14.762 21:17:03 -- setup/acl.sh@10 -- # get_zoned_devs 00:14:14.762 21:17:03 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:14.762 21:17:03 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:14.762 21:17:03 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:14.762 21:17:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:14.762 21:17:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:14.762 21:17:03 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:14.762 21:17:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:14.762 21:17:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:14.762 21:17:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:14.762 21:17:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:14:14.762 21:17:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:14.762 21:17:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:14.762 21:17:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:14.762 21:17:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:14.762 21:17:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:14:14.762 21:17:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:14:14.762 21:17:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:14.762 21:17:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:14.762 21:17:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:14.762 21:17:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:14:14.762 21:17:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:14:14.762 21:17:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:14.762 21:17:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:14.762 21:17:03 -- setup/acl.sh@12 -- # devs=() 00:14:14.762 21:17:03 -- setup/acl.sh@12 -- # declare -a devs 00:14:14.762 21:17:03 -- setup/acl.sh@13 -- # drivers=() 00:14:14.762 21:17:03 -- setup/acl.sh@13 -- # declare -A drivers 00:14:14.762 21:17:03 -- setup/acl.sh@51 -- # setup reset 00:14:14.762 21:17:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:14.762 21:17:03 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:15.699 21:17:04 -- setup/acl.sh@52 -- # collect_setup_devs 00:14:15.699 21:17:04 -- setup/acl.sh@16 -- # local dev driver 00:14:15.699 21:17:04 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:15.699 21:17:04 -- setup/acl.sh@15 -- # setup output status 00:14:15.699 21:17:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:15.699 21:17:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:16.266 21:17:05 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:14:16.266 21:17:05 -- setup/acl.sh@19 -- # continue 00:14:16.266 21:17:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:16.266 Hugepages 00:14:16.266 node hugesize free / total 00:14:16.266 21:17:05 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:14:16.266 21:17:05 -- setup/acl.sh@19 -- # continue 00:14:16.266 21:17:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:16.266 00:14:16.267 Type BDF Vendor Device NUMA Driver 
Device Block devices 00:14:16.267 21:17:05 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:14:16.267 21:17:05 -- setup/acl.sh@19 -- # continue 00:14:16.267 21:17:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:16.530 21:17:05 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:14:16.530 21:17:05 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:14:16.530 21:17:05 -- setup/acl.sh@20 -- # continue 00:14:16.530 21:17:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:16.530 21:17:05 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:14:16.530 21:17:05 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:16.530 21:17:05 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:14:16.530 21:17:05 -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:16.530 21:17:05 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:16.530 21:17:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:16.530 21:17:05 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:14:16.530 21:17:05 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:16.530 21:17:05 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:16.530 21:17:05 -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:16.530 21:17:05 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:16.530 21:17:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:16.530 21:17:05 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:14:16.530 21:17:05 -- setup/acl.sh@54 -- # run_test denied denied 00:14:16.530 21:17:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:16.788 21:17:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:16.788 21:17:05 -- common/autotest_common.sh@10 -- # set +x 00:14:16.788 ************************************ 00:14:16.788 START TEST denied 00:14:16.788 ************************************ 00:14:16.788 21:17:05 -- common/autotest_common.sh@1111 -- # denied 00:14:16.788 21:17:05 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:14:16.788 21:17:05 -- setup/acl.sh@38 -- # setup output config 00:14:16.788 21:17:05 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:14:16.788 21:17:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:16.788 21:17:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:17.725 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:14:17.725 21:17:06 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:14:17.725 21:17:06 -- setup/acl.sh@28 -- # local dev driver 00:14:17.725 21:17:06 -- setup/acl.sh@30 -- # for dev in "$@" 00:14:17.725 21:17:06 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:14:17.725 21:17:06 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:14:17.725 21:17:06 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:17.725 21:17:06 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:17.725 21:17:06 -- setup/acl.sh@41 -- # setup reset 00:14:17.725 21:17:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:17.725 21:17:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:18.663 00:14:18.664 real 0m1.843s 00:14:18.664 user 0m0.638s 00:14:18.664 sys 0m1.152s 00:14:18.664 21:17:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:18.664 21:17:07 -- common/autotest_common.sh@10 -- # set +x 00:14:18.664 ************************************ 00:14:18.664 END TEST denied 00:14:18.664 ************************************ 00:14:18.664 21:17:07 -- setup/acl.sh@55 
-- # run_test allowed allowed 00:14:18.664 21:17:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:18.664 21:17:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:18.664 21:17:07 -- common/autotest_common.sh@10 -- # set +x 00:14:18.664 ************************************ 00:14:18.664 START TEST allowed 00:14:18.664 ************************************ 00:14:18.664 21:17:07 -- common/autotest_common.sh@1111 -- # allowed 00:14:18.664 21:17:07 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:14:18.664 21:17:07 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:14:18.664 21:17:07 -- setup/acl.sh@45 -- # setup output config 00:14:18.664 21:17:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:18.664 21:17:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:19.601 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:19.601 21:17:08 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:14:19.601 21:17:08 -- setup/acl.sh@28 -- # local dev driver 00:14:19.601 21:17:08 -- setup/acl.sh@30 -- # for dev in "$@" 00:14:19.601 21:17:08 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:14:19.601 21:17:08 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:14:19.601 21:17:08 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:19.601 21:17:08 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:19.601 21:17:08 -- setup/acl.sh@48 -- # setup reset 00:14:19.601 21:17:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:19.601 21:17:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:20.544 00:14:20.544 real 0m1.786s 00:14:20.544 user 0m0.685s 00:14:20.544 sys 0m1.110s 00:14:20.544 21:17:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:20.544 ************************************ 00:14:20.544 END TEST allowed 00:14:20.544 ************************************ 00:14:20.544 21:17:09 -- common/autotest_common.sh@10 -- # set +x 00:14:20.544 00:14:20.544 real 0m6.035s 00:14:20.544 user 0m2.320s 00:14:20.544 sys 0m3.687s 00:14:20.544 21:17:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:20.544 21:17:09 -- common/autotest_common.sh@10 -- # set +x 00:14:20.544 ************************************ 00:14:20.544 END TEST acl 00:14:20.544 ************************************ 00:14:20.544 21:17:09 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:20.544 21:17:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:20.544 21:17:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:20.545 21:17:09 -- common/autotest_common.sh@10 -- # set +x 00:14:20.808 ************************************ 00:14:20.808 START TEST hugepages 00:14:20.808 ************************************ 00:14:20.808 21:17:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:20.808 * Looking for test storage... 
00:14:20.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:20.808 21:17:09 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:14:20.808 21:17:09 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:14:20.808 21:17:09 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:14:20.808 21:17:09 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:14:20.808 21:17:09 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:14:20.808 21:17:09 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:14:20.808 21:17:09 -- setup/common.sh@17 -- # local get=Hugepagesize 00:14:20.808 21:17:09 -- setup/common.sh@18 -- # local node= 00:14:20.808 21:17:09 -- setup/common.sh@19 -- # local var val 00:14:20.808 21:17:09 -- setup/common.sh@20 -- # local mem_f mem 00:14:20.808 21:17:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:20.808 21:17:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:20.808 21:17:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:20.808 21:17:09 -- setup/common.sh@28 -- # mapfile -t mem 00:14:20.808 21:17:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.808 21:17:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 4000700 kB' 'MemAvailable: 7378980 kB' 'Buffers: 2436 kB' 'Cached: 3577344 kB' 'SwapCached: 0 kB' 'Active: 877596 kB' 'Inactive: 2810028 kB' 'Active(anon): 118336 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 109472 kB' 'Mapped: 49112 kB' 'Shmem: 10492 kB' 'KReclaimable: 91864 kB' 'Slab: 176388 kB' 'SReclaimable: 91864 kB' 'SUnreclaim: 84524 kB' 'KernelStack: 6536 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 345360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.808 21:17:09 -- 
setup/common.sh@32 -- # continue 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.808 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.808 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # continue 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # IFS=': ' 00:14:20.809 21:17:09 -- setup/common.sh@31 -- # read -r var val _ 00:14:20.809 21:17:09 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:20.809 21:17:09 -- setup/common.sh@33 -- # echo 2048 00:14:20.809 21:17:09 -- setup/common.sh@33 -- # return 0 00:14:20.809 21:17:09 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:14:20.809 21:17:09 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:14:20.809 21:17:09 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:14:20.810 21:17:09 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:14:20.810 21:17:09 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:14:20.810 21:17:09 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
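[editor's note] The long run of "[[ ... == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue" lines above is setup/common.sh walking /proc/meminfo one field at a time until it hits Hugepagesize, then echoing its value (2048 kB on this runner) back to hugepages.sh. A rough, self-contained sketch of that lookup pattern is below; the helper name get_meminfo_value is ours for illustration, not part of the SPDK scripts.

    # Hypothetical helper mirroring the lookup pattern visible in the trace:
    # read /proc/meminfo line by line, skip fields until the requested key
    # matches, then print its numeric value (kB for most fields).
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done </proc/meminfo
        return 1
    }
    # Example: get_meminfo_value Hugepagesize   -> prints 2048 on this runner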
00:14:20.810 21:17:09 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:14:20.810 21:17:09 -- setup/hugepages.sh@207 -- # get_nodes 00:14:20.810 21:17:09 -- setup/hugepages.sh@27 -- # local node 00:14:20.810 21:17:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:20.810 21:17:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:14:20.810 21:17:09 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:20.810 21:17:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:20.810 21:17:09 -- setup/hugepages.sh@208 -- # clear_hp 00:14:20.810 21:17:09 -- setup/hugepages.sh@37 -- # local node hp 00:14:20.810 21:17:09 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:14:20.810 21:17:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:20.810 21:17:09 -- setup/hugepages.sh@41 -- # echo 0 00:14:20.810 21:17:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:20.810 21:17:09 -- setup/hugepages.sh@41 -- # echo 0 00:14:20.810 21:17:09 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:14:20.810 21:17:09 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:14:20.810 21:17:09 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:14:20.810 21:17:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:20.810 21:17:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:20.810 21:17:09 -- common/autotest_common.sh@10 -- # set +x 00:14:21.075 ************************************ 00:14:21.075 START TEST default_setup 00:14:21.075 ************************************ 00:14:21.075 21:17:10 -- common/autotest_common.sh@1111 -- # default_setup 00:14:21.075 21:17:10 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:14:21.075 21:17:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:14:21.075 21:17:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:21.075 21:17:10 -- setup/hugepages.sh@51 -- # shift 00:14:21.075 21:17:10 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:21.075 21:17:10 -- setup/hugepages.sh@52 -- # local node_ids 00:14:21.075 21:17:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:21.075 21:17:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:21.075 21:17:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:21.075 21:17:10 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:21.075 21:17:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:21.075 21:17:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:21.075 21:17:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:21.075 21:17:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:21.075 21:17:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:21.075 21:17:10 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:21.075 21:17:10 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:21.075 21:17:10 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:14:21.075 21:17:10 -- setup/hugepages.sh@73 -- # return 0 00:14:21.075 21:17:10 -- setup/hugepages.sh@137 -- # setup output 00:14:21.075 21:17:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:21.075 21:17:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:21.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:21.930 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:21.930 
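[editor's note] Two steps happen in the trace above: get_test_nr_hugepages converts the requested 2097152 kB into a page count using the 2048 kB default page size (2097152 / 2048 = 1024, matching nr_hugepages=1024), and clear_hp zeroes the per-node hugepage counters before setup.sh rebinds the NVMe controllers to uio_pci_generic. A small sketch of both steps follows; the exact sysfs write target is an assumption, since the trace only shows "echo 0" inside the loop.

    # Sketch of the arithmetic and per-node reset seen in the trace.
    # Assumption: the zeroes go into each hugepages-*/nr_hugepages file (needs root).
    size_kb=2097152            # requested test size in kB
    default_hugepages_kb=2048  # Hugepagesize reported by /proc/meminfo
    nr_hugepages=$(( size_kb / default_hugepages_kb ))   # -> 1024

    for hp in /sys/devices/system/node/node0/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # assumed target for the traced 'echo 0'
    done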
0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:21.930 21:17:11 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:14:21.930 21:17:11 -- setup/hugepages.sh@89 -- # local node 00:14:21.930 21:17:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:21.930 21:17:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:21.930 21:17:11 -- setup/hugepages.sh@92 -- # local surp 00:14:21.930 21:17:11 -- setup/hugepages.sh@93 -- # local resv 00:14:21.930 21:17:11 -- setup/hugepages.sh@94 -- # local anon 00:14:21.930 21:17:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:21.930 21:17:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:21.930 21:17:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:21.930 21:17:11 -- setup/common.sh@18 -- # local node= 00:14:21.930 21:17:11 -- setup/common.sh@19 -- # local var val 00:14:21.930 21:17:11 -- setup/common.sh@20 -- # local mem_f mem 00:14:21.930 21:17:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:21.930 21:17:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:21.930 21:17:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:21.930 21:17:11 -- setup/common.sh@28 -- # mapfile -t mem 00:14:21.930 21:17:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:21.930 21:17:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6100432 kB' 'MemAvailable: 9478588 kB' 'Buffers: 2436 kB' 'Cached: 3577340 kB' 'SwapCached: 0 kB' 'Active: 893708 kB' 'Inactive: 2810040 kB' 'Active(anon): 134448 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 916 kB' 'Writeback: 0 kB' 'AnonPages: 125576 kB' 'Mapped: 49400 kB' 'Shmem: 10468 kB' 'KReclaimable: 91592 kB' 'Slab: 176080 kB' 'SReclaimable: 91592 kB' 'SUnreclaim: 84488 kB' 'KernelStack: 6528 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 360520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 
21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.930 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.930 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 
-- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:21.931 21:17:11 -- setup/common.sh@33 -- # echo 0 00:14:21.931 21:17:11 -- setup/common.sh@33 -- # return 0 00:14:21.931 21:17:11 -- setup/hugepages.sh@97 -- # anon=0 00:14:21.931 21:17:11 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:21.931 21:17:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:21.931 21:17:11 -- setup/common.sh@18 -- # local node= 00:14:21.931 21:17:11 -- setup/common.sh@19 -- # local var val 00:14:21.931 21:17:11 -- setup/common.sh@20 -- # local mem_f mem 00:14:21.931 21:17:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:21.931 21:17:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:21.931 21:17:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:21.931 21:17:11 -- setup/common.sh@28 -- # mapfile -t mem 00:14:21.931 21:17:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6100948 kB' 'MemAvailable: 9479112 kB' 'Buffers: 2436 kB' 'Cached: 3577340 kB' 'SwapCached: 0 kB' 'Active: 893288 kB' 'Inactive: 2810048 kB' 'Active(anon): 134028 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 916 kB' 'Writeback: 0 kB' 'AnonPages: 125168 kB' 'Mapped: 49072 kB' 'Shmem: 10468 kB' 'KReclaimable: 91592 kB' 'Slab: 176068 kB' 'SReclaimable: 91592 kB' 'SUnreclaim: 84476 kB' 'KernelStack: 6496 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 360520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.931 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.931 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 
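[editor's note] The get_meminfo calls in this section also show the per-node path: when a node number is given, the script reads /sys/devices/system/node/node$N/meminfo instead of /proc/meminfo, and the expansion mem=("${mem[@]#Node +([0-9]) }") seen above strips the leading "Node N " prefix those per-node files carry so the rest of the parser stays identical. A standalone illustration (extglob is required for the +([0-9]) pattern) is given below; the sample input lines are fabricated for the demo.

    # Minimal demonstration of the prefix-stripping expansion used in the trace.
    shopt -s extglob
    mapfile -t mem < <(printf '%s\n' 'Node 0 MemTotal: 12241976 kB' 'Node 0 MemFree: 6100948 kB')
    mem=("${mem[@]#Node +([0-9]) }")   # drop the 'Node 0 ' prefix from every line
    printf '%s\n' "${mem[@]}"          # -> 'MemTotal: 12241976 kB' etc.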
00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- 
setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 
00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.932 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.932 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:21.932 21:17:11 -- setup/common.sh@33 -- # echo 0 00:14:21.932 21:17:11 -- setup/common.sh@33 -- # return 0 00:14:21.932 21:17:11 -- setup/hugepages.sh@99 -- # surp=0 00:14:21.932 21:17:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:21.932 21:17:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:21.932 21:17:11 -- setup/common.sh@18 -- # local node= 00:14:21.933 21:17:11 -- setup/common.sh@19 -- # local var val 00:14:21.933 21:17:11 -- setup/common.sh@20 -- # local mem_f mem 00:14:21.933 21:17:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:21.933 21:17:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:21.933 21:17:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:21.933 21:17:11 -- setup/common.sh@28 -- # mapfile -t mem 00:14:21.933 21:17:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:21.933 21:17:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6101416 kB' 'MemAvailable: 9479580 kB' 'Buffers: 2436 kB' 'Cached: 3577340 kB' 'SwapCached: 0 kB' 'Active: 893312 kB' 'Inactive: 2810048 kB' 'Active(anon): 134052 kB' 'Inactive(anon): 0 kB' 'Active(file): 
759260 kB' 'Inactive(file): 2810048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 916 kB' 'Writeback: 0 kB' 'AnonPages: 125172 kB' 'Mapped: 49072 kB' 'Shmem: 10468 kB' 'KReclaimable: 91592 kB' 'Slab: 176084 kB' 'SReclaimable: 91592 kB' 'SUnreclaim: 84492 kB' 'KernelStack: 6528 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 360520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # read -r var val 
_ 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.933 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.933 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # continue 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:21.934 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:21.934 21:17:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # continue 
00:14:22.195 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.195 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.195 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.195 21:17:11 -- setup/common.sh@33 -- # echo 0 00:14:22.195 21:17:11 -- setup/common.sh@33 -- # return 0 00:14:22.195 21:17:11 -- setup/hugepages.sh@100 -- # resv=0 00:14:22.195 21:17:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:22.195 nr_hugepages=1024 00:14:22.195 21:17:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:22.195 resv_hugepages=0 00:14:22.195 21:17:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:22.195 surplus_hugepages=0 00:14:22.195 21:17:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:22.195 anon_hugepages=0 00:14:22.195 21:17:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:22.195 21:17:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:22.195 21:17:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:22.195 21:17:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:22.195 21:17:11 -- setup/common.sh@18 -- # local node= 00:14:22.195 21:17:11 -- setup/common.sh@19 -- # local var val 00:14:22.195 21:17:11 -- setup/common.sh@20 -- # local mem_f mem 00:14:22.195 21:17:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:22.195 21:17:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:22.195 21:17:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:22.195 21:17:11 -- setup/common.sh@28 -- # mapfile -t mem 00:14:22.195 21:17:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:22.196 21:17:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6101416 kB' 'MemAvailable: 9479580 kB' 'Buffers: 2436 kB' 'Cached: 3577340 kB' 'SwapCached: 0 kB' 'Active: 893312 kB' 'Inactive: 2810048 kB' 'Active(anon): 134052 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 916 kB' 'Writeback: 0 kB' 'AnonPages: 125172 kB' 'Mapped: 49072 kB' 'Shmem: 10468 kB' 'KReclaimable: 91592 kB' 'Slab: 176084 kB' 'SReclaimable: 91592 kB' 'SUnreclaim: 84492 kB' 'KernelStack: 6528 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 360520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 
kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r 
var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 
21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.196 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.196 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:22.197 21:17:11 -- setup/common.sh@33 -- # echo 1024 
00:14:22.197 21:17:11 -- setup/common.sh@33 -- # return 0 00:14:22.197 21:17:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:22.197 21:17:11 -- setup/hugepages.sh@112 -- # get_nodes 00:14:22.197 21:17:11 -- setup/hugepages.sh@27 -- # local node 00:14:22.197 21:17:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:22.197 21:17:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:22.197 21:17:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:22.197 21:17:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:22.197 21:17:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:22.197 21:17:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:22.197 21:17:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:22.197 21:17:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:22.197 21:17:11 -- setup/common.sh@18 -- # local node=0 00:14:22.197 21:17:11 -- setup/common.sh@19 -- # local var val 00:14:22.197 21:17:11 -- setup/common.sh@20 -- # local mem_f mem 00:14:22.197 21:17:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:22.197 21:17:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:22.197 21:17:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:22.197 21:17:11 -- setup/common.sh@28 -- # mapfile -t mem 00:14:22.197 21:17:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6101544 kB' 'MemUsed: 6140432 kB' 'SwapCached: 0 kB' 'Active: 893308 kB' 'Inactive: 2810048 kB' 'Active(anon): 134048 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 916 kB' 'Writeback: 0 kB' 'FilePages: 3579776 kB' 'Mapped: 49072 kB' 'AnonPages: 125172 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91592 kB' 'Slab: 176084 kB' 'SReclaimable: 91592 kB' 'SUnreclaim: 84492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 
21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 
21:17:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.197 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.197 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.198 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.198 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.198 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.198 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.198 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.198 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.198 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.198 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.198 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.198 21:17:11 -- setup/common.sh@33 -- # echo 0 00:14:22.198 21:17:11 -- setup/common.sh@33 -- # return 0 00:14:22.198 21:17:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:22.198 21:17:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:22.198 21:17:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:22.198 21:17:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:22.198 node0=1024 expecting 1024 00:14:22.198 21:17:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:22.198 21:17:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:22.198 00:14:22.198 real 0m1.180s 00:14:22.198 user 0m0.501s 00:14:22.198 sys 0m0.635s 00:14:22.198 21:17:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:22.198 21:17:11 -- common/autotest_common.sh@10 -- # set +x 00:14:22.198 ************************************ 00:14:22.198 END TEST default_setup 00:14:22.198 ************************************ 00:14:22.198 21:17:11 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:14:22.198 21:17:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:22.198 21:17:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:22.198 21:17:11 -- common/autotest_common.sh@10 -- # set +x 00:14:22.198 ************************************ 00:14:22.198 START TEST per_node_1G_alloc 00:14:22.198 ************************************ 
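The default_setup trace above verifies the reservation by parsing the kernel's hugepage counters: get_meminfo reads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node is given), strips the "Node N " prefix, and echoes the value of one key such as HugePages_Total or HugePages_Surp; the test then checks that HugePages_Total equals nr_hugepages plus surplus and reserved pages, per node as well as globally. A minimal stand-alone sketch of that lookup follows; it is illustrative only, not part of the SPDK scripts, and the helper name get_meminfo_sketch is made up here.

#!/usr/bin/env bash
# Hypothetical helper mirroring what the traced setup/common.sh get_meminfo does:
# pick the global or per-node meminfo file, drop the "Node N " prefix that the
# per-node files carry, and print one counter as a bare number.
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    sed 's/^Node [0-9]* //' "$mem_f" \
        | awk -v k="$key" -F': *' '$1 == k { print $2+0; exit }'
}

# The consistency check seen in the trace, restated (values from the run above):
total=$(get_meminfo_sketch HugePages_Total)   # 1024
surp=$(get_meminfo_sketch HugePages_Surp)     # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0
(( total == 1024 + surp + resv )) && echo "hugepage reservation verified"

The same lookup with a node argument (get_meminfo_sketch HugePages_Surp 0) corresponds to the per-node read of /sys/devices/system/node/node0/meminfo in the trace, which is what produces the "node0=1024 expecting 1024" line before END TEST default_setup.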
00:14:22.198 21:17:11 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:14:22.198 21:17:11 -- setup/hugepages.sh@143 -- # local IFS=, 00:14:22.198 21:17:11 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:14:22.198 21:17:11 -- setup/hugepages.sh@49 -- # local size=1048576 00:14:22.198 21:17:11 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:22.198 21:17:11 -- setup/hugepages.sh@51 -- # shift 00:14:22.198 21:17:11 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:22.198 21:17:11 -- setup/hugepages.sh@52 -- # local node_ids 00:14:22.198 21:17:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:22.198 21:17:11 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:14:22.198 21:17:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:22.198 21:17:11 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:22.198 21:17:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:22.198 21:17:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:22.198 21:17:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:22.198 21:17:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:22.198 21:17:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:22.198 21:17:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:22.198 21:17:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:22.198 21:17:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:14:22.198 21:17:11 -- setup/hugepages.sh@73 -- # return 0 00:14:22.198 21:17:11 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:14:22.198 21:17:11 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:14:22.198 21:17:11 -- setup/hugepages.sh@146 -- # setup output 00:14:22.198 21:17:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:22.198 21:17:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:22.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:22.768 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:22.768 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:22.769 21:17:11 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:14:22.769 21:17:11 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:14:22.769 21:17:11 -- setup/hugepages.sh@89 -- # local node 00:14:22.769 21:17:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:22.769 21:17:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:22.769 21:17:11 -- setup/hugepages.sh@92 -- # local surp 00:14:22.769 21:17:11 -- setup/hugepages.sh@93 -- # local resv 00:14:22.769 21:17:11 -- setup/hugepages.sh@94 -- # local anon 00:14:22.769 21:17:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:22.769 21:17:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:22.769 21:17:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:22.769 21:17:11 -- setup/common.sh@18 -- # local node= 00:14:22.769 21:17:11 -- setup/common.sh@19 -- # local var val 00:14:22.769 21:17:11 -- setup/common.sh@20 -- # local mem_f mem 00:14:22.769 21:17:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:22.769 21:17:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:22.769 21:17:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:22.769 21:17:11 -- setup/common.sh@28 -- # mapfile -t mem 00:14:22.769 21:17:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 
00:14:22.769 21:17:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7160220 kB' 'MemAvailable: 10538392 kB' 'Buffers: 2436 kB' 'Cached: 3577348 kB' 'SwapCached: 0 kB' 'Active: 893356 kB' 'Inactive: 2810056 kB' 'Active(anon): 134096 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1076 kB' 'Writeback: 0 kB' 'AnonPages: 125192 kB' 'Mapped: 49212 kB' 'Shmem: 10468 kB' 'KReclaimable: 91592 kB' 'Slab: 176192 kB' 'SReclaimable: 91592 kB' 'SUnreclaim: 84600 kB' 'KernelStack: 6532 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 360176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 
-- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 
21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.769 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.769 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:22.770 21:17:11 -- setup/common.sh@33 -- # echo 0 00:14:22.770 21:17:11 -- setup/common.sh@33 -- # return 0 00:14:22.770 21:17:11 -- setup/hugepages.sh@97 -- # anon=0 00:14:22.770 21:17:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:22.770 21:17:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:22.770 21:17:11 -- setup/common.sh@18 -- # local node= 00:14:22.770 21:17:11 -- setup/common.sh@19 -- # local var val 00:14:22.770 21:17:11 -- setup/common.sh@20 -- # local mem_f mem 00:14:22.770 21:17:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:22.770 21:17:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:22.770 21:17:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:22.770 21:17:11 -- setup/common.sh@28 -- # mapfile -t mem 00:14:22.770 21:17:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7160224 kB' 'MemAvailable: 10538396 kB' 'Buffers: 2436 kB' 'Cached: 3577348 kB' 'SwapCached: 0 kB' 'Active: 893272 kB' 'Inactive: 2810056 kB' 'Active(anon): 134012 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1076 kB' 'Writeback: 0 kB' 'AnonPages: 125236 kB' 'Mapped: 49084 kB' 'Shmem: 10468 kB' 'KReclaimable: 91592 
kB' 'Slab: 176188 kB' 'SReclaimable: 91592 kB' 'SUnreclaim: 84596 kB' 'KernelStack: 6560 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 360932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 
21:17:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.770 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 
00:14:22.770 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.770 21:17:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # 
read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:22.771 21:17:11 -- setup/common.sh@33 -- # echo 0 00:14:22.771 21:17:11 -- setup/common.sh@33 -- # return 0 00:14:22.771 21:17:11 -- setup/hugepages.sh@99 -- # surp=0 00:14:22.771 21:17:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:22.771 21:17:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:22.771 21:17:11 -- setup/common.sh@18 -- # local node= 00:14:22.771 21:17:11 -- setup/common.sh@19 -- # local var val 00:14:22.771 21:17:11 -- setup/common.sh@20 -- # local mem_f mem 00:14:22.771 21:17:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:22.771 21:17:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:22.771 21:17:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:22.771 21:17:11 -- setup/common.sh@28 -- # mapfile -t mem 00:14:22.771 21:17:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:22.771 21:17:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7160224 kB' 'MemAvailable: 10538396 kB' 'Buffers: 2436 kB' 'Cached: 3577348 kB' 'SwapCached: 0 kB' 'Active: 893196 kB' 'Inactive: 2810056 kB' 'Active(anon): 133936 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1076 kB' 'Writeback: 0 kB' 'AnonPages: 125352 kB' 'Mapped: 49084 kB' 'Shmem: 10468 kB' 'KReclaimable: 91592 kB' 'Slab: 176188 kB' 'SReclaimable: 91592 kB' 'SUnreclaim: 84596 kB' 'KernelStack: 6560 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 360176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.771 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.771 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:11 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:11 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:11 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- 
# continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:22.772 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:22.772 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.033 21:17:12 -- setup/common.sh@33 -- # echo 0 00:14:23.033 21:17:12 -- setup/common.sh@33 -- # return 0 00:14:23.033 21:17:12 -- setup/hugepages.sh@100 -- # resv=0 00:14:23.033 21:17:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:14:23.033 nr_hugepages=512 00:14:23.033 
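
The xtrace above is setup/common.sh's get_meminfo walking every /proc/meminfo key until it reaches the one requested (here HugePages_Surp and then HugePages_Rsvd, both 0 in this run). A condensed, illustrative sketch of that lookup (not the repo's exact helper, and with a simplified per-node path) would look roughly like this:

    #!/usr/bin/env bash
    # Illustrative sketch only: fetch one field from /proc/meminfo, or from a
    # per-node meminfo file when a node number is given, mirroring the loop
    # traced above.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo

        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it with the
        # same extglob expansion the trace shows.
        mem=("${mem[@]#Node +([0-9]) }")

        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        echo 0
    }

    get_meminfo HugePages_Rsvd      # system-wide lookup, 0 in this run
    get_meminfo HugePages_Surp 0    # node-0 lookup, also 0 here
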
21:17:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:23.033 resv_hugepages=0 00:14:23.033 surplus_hugepages=0 00:14:23.033 anon_hugepages=0 00:14:23.033 21:17:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:23.033 21:17:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:23.033 21:17:12 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:23.033 21:17:12 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:14:23.033 21:17:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:23.033 21:17:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:23.033 21:17:12 -- setup/common.sh@18 -- # local node= 00:14:23.033 21:17:12 -- setup/common.sh@19 -- # local var val 00:14:23.033 21:17:12 -- setup/common.sh@20 -- # local mem_f mem 00:14:23.033 21:17:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:23.033 21:17:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:23.033 21:17:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:23.033 21:17:12 -- setup/common.sh@28 -- # mapfile -t mem 00:14:23.033 21:17:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7160224 kB' 'MemAvailable: 10538396 kB' 'Buffers: 2436 kB' 'Cached: 3577348 kB' 'SwapCached: 0 kB' 'Active: 893216 kB' 'Inactive: 2810056 kB' 'Active(anon): 133956 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1076 kB' 'Writeback: 0 kB' 'AnonPages: 125092 kB' 'Mapped: 49084 kB' 'Shmem: 10468 kB' 'KReclaimable: 91592 kB' 'Slab: 176188 kB' 'SReclaimable: 91592 kB' 'SUnreclaim: 84596 kB' 'KernelStack: 6528 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 360176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.033 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.033 21:17:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 
00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 
21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.034 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.034 21:17:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.035 21:17:12 -- setup/common.sh@33 -- # echo 512 00:14:23.035 21:17:12 -- setup/common.sh@33 -- # return 0 00:14:23.035 21:17:12 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:23.035 21:17:12 -- setup/hugepages.sh@112 -- # get_nodes 00:14:23.035 21:17:12 -- setup/hugepages.sh@27 -- # local node 00:14:23.035 21:17:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:23.035 21:17:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:14:23.035 21:17:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:23.035 21:17:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:23.035 21:17:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:23.035 21:17:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:23.035 21:17:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:23.035 21:17:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:23.035 21:17:12 -- setup/common.sh@18 -- # local node=0 00:14:23.035 21:17:12 -- setup/common.sh@19 -- # local var val 00:14:23.035 21:17:12 -- setup/common.sh@20 -- # local mem_f mem 00:14:23.035 21:17:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:23.035 21:17:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:23.035 21:17:12 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:14:23.035 21:17:12 -- setup/common.sh@28 -- # mapfile -t mem 00:14:23.035 21:17:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7160516 kB' 'MemUsed: 5081460 kB' 'SwapCached: 0 kB' 'Active: 893216 kB' 'Inactive: 2810056 kB' 'Active(anon): 133956 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1076 kB' 'Writeback: 0 kB' 'FilePages: 3579784 kB' 'Mapped: 49084 kB' 'AnonPages: 125092 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91592 kB' 'Slab: 176188 kB' 'SReclaimable: 91592 kB' 'SUnreclaim: 84596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- 
setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.035 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.035 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.036 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.036 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.036 21:17:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.036 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.036 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.036 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.036 21:17:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.036 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.036 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.036 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.036 21:17:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.036 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.036 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.036 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.036 21:17:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.036 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.036 21:17:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:14:23.036 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.036 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.036 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.036 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.036 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.036 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.036 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.036 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.036 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.036 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.036 21:17:12 -- setup/common.sh@33 -- # echo 0 00:14:23.036 21:17:12 -- setup/common.sh@33 -- # return 0 00:14:23.036 21:17:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:23.036 21:17:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:23.036 21:17:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:23.036 21:17:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:23.036 21:17:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:14:23.036 node0=512 expecting 512 00:14:23.036 21:17:12 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:14:23.036 00:14:23.036 real 0m0.706s 00:14:23.036 user 0m0.312s 00:14:23.036 sys 0m0.408s 00:14:23.036 21:17:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:23.036 21:17:12 -- common/autotest_common.sh@10 -- # set +x 00:14:23.036 ************************************ 00:14:23.036 END TEST per_node_1G_alloc 00:14:23.036 ************************************ 00:14:23.036 21:17:12 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:14:23.036 21:17:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:23.036 21:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:23.036 21:17:12 -- common/autotest_common.sh@10 -- # set +x 00:14:23.036 ************************************ 00:14:23.036 START TEST even_2G_alloc 00:14:23.036 ************************************ 00:14:23.036 21:17:12 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:14:23.036 21:17:12 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:14:23.036 21:17:12 -- setup/hugepages.sh@49 -- # local size=2097152 00:14:23.036 21:17:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:23.036 21:17:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:23.036 21:17:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:23.036 21:17:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:23.036 21:17:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:23.036 21:17:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:23.036 21:17:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:23.036 21:17:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:23.036 21:17:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:23.036 21:17:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:23.036 21:17:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:23.036 21:17:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:23.036 21:17:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:23.036 21:17:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:14:23.036 21:17:12 -- setup/hugepages.sh@83 -- # : 0 00:14:23.036 21:17:12 -- 
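
The per_node_1G_alloc check above ended with node 0 reporting 512 hugepages against an expectation of 512 ("node0=512 expecting 512"), and even_2G_alloc is now being set up: the 2097152 kB request divides by the 2048 kB default hugepage size into 1024 pages, spread over the single node present here. A hypothetical, condensed restatement of the per-node check that just passed (the real test also folds in the reserved and surplus counts gathered above):

    #!/usr/bin/env bash
    # Illustrative only: confirm every NUMA node exposes the expected number
    # of hugepages in its per-node meminfo, as the trace just verified.
    expected=512
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
        echo "node${node}=${total} expecting ${expected}"
        [[ $total -eq $expected ]] || exit 1
    done
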
setup/hugepages.sh@84 -- # : 0 00:14:23.036 21:17:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:23.036 21:17:12 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:14:23.036 21:17:12 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:14:23.036 21:17:12 -- setup/hugepages.sh@153 -- # setup output 00:14:23.036 21:17:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:23.036 21:17:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:23.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:23.608 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:23.608 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:23.608 21:17:12 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:14:23.608 21:17:12 -- setup/hugepages.sh@89 -- # local node 00:14:23.608 21:17:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:23.608 21:17:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:23.608 21:17:12 -- setup/hugepages.sh@92 -- # local surp 00:14:23.608 21:17:12 -- setup/hugepages.sh@93 -- # local resv 00:14:23.608 21:17:12 -- setup/hugepages.sh@94 -- # local anon 00:14:23.608 21:17:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:23.608 21:17:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:23.608 21:17:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:23.608 21:17:12 -- setup/common.sh@18 -- # local node= 00:14:23.608 21:17:12 -- setup/common.sh@19 -- # local var val 00:14:23.608 21:17:12 -- setup/common.sh@20 -- # local mem_f mem 00:14:23.608 21:17:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:23.608 21:17:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:23.608 21:17:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:23.608 21:17:12 -- setup/common.sh@28 -- # mapfile -t mem 00:14:23.608 21:17:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:23.608 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.608 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.608 21:17:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6115368 kB' 'MemAvailable: 9493552 kB' 'Buffers: 2436 kB' 'Cached: 3577352 kB' 'SwapCached: 0 kB' 'Active: 893884 kB' 'Inactive: 2810060 kB' 'Active(anon): 134624 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1244 kB' 'Writeback: 0 kB' 'AnonPages: 125712 kB' 'Mapped: 49216 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176184 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84576 kB' 'KernelStack: 6516 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 360316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:23.608 21:17:12 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.608 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.608 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.608 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.608 21:17:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.608 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.608 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.608 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.608 21:17:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.608 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.608 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.608 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.608 21:17:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.608 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.608 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.608 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.608 21:17:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.608 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var 
val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 
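
Before comparing hugepage counts, even_2G_alloc's verify_nr_hugepages checks whether transparent hugepages are disabled; since the policy string here is "always [madvise] never" rather than "[never]", it also samples AnonHugePages, which is 0 kB in this run. A rough, illustrative equivalent of that guard:

    #!/usr/bin/env bash
    # Illustrative only: record current anonymous (THP) hugepage usage unless
    # transparent hugepages are globally disabled.
    anon=0
    if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(awk '/AnonHugePages/ {print $2}' /proc/meminfo)   # kB; 0 here
    fi
    echo "anon_hugepages=${anon}"
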
21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # 
continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.609 21:17:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:23.609 21:17:12 -- setup/common.sh@33 -- # echo 0 00:14:23.609 21:17:12 -- setup/common.sh@33 -- # return 0 00:14:23.609 21:17:12 -- setup/hugepages.sh@97 -- # anon=0 00:14:23.609 21:17:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:23.609 21:17:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:23.609 21:17:12 -- setup/common.sh@18 -- # local node= 00:14:23.609 21:17:12 -- setup/common.sh@19 -- # local var val 00:14:23.609 21:17:12 -- setup/common.sh@20 -- # local mem_f mem 00:14:23.609 21:17:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:23.609 21:17:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:23.609 21:17:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:23.609 21:17:12 -- setup/common.sh@28 -- # mapfile -t mem 00:14:23.609 21:17:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:23.609 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6115368 kB' 'MemAvailable: 9493552 kB' 'Buffers: 2436 kB' 'Cached: 3577352 kB' 'SwapCached: 0 kB' 'Active: 893636 kB' 'Inactive: 2810060 kB' 'Active(anon): 134376 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1244 kB' 'Writeback: 0 kB' 'AnonPages: 125484 kB' 'Mapped: 49216 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176184 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84576 kB' 'KernelStack: 6516 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 360316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # 
continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- 
# read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.610 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.610 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- 
# continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.611 21:17:12 -- setup/common.sh@33 -- # echo 0 00:14:23.611 21:17:12 -- setup/common.sh@33 -- # return 0 00:14:23.611 21:17:12 -- setup/hugepages.sh@99 -- # surp=0 00:14:23.611 21:17:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:23.611 21:17:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:23.611 21:17:12 -- setup/common.sh@18 -- # local node= 00:14:23.611 21:17:12 -- setup/common.sh@19 -- # local var val 00:14:23.611 21:17:12 -- 
setup/common.sh@20 -- # local mem_f mem 00:14:23.611 21:17:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:23.611 21:17:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:23.611 21:17:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:23.611 21:17:12 -- setup/common.sh@28 -- # mapfile -t mem 00:14:23.611 21:17:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6115116 kB' 'MemAvailable: 9493300 kB' 'Buffers: 2436 kB' 'Cached: 3577352 kB' 'SwapCached: 0 kB' 'Active: 893288 kB' 'Inactive: 2810060 kB' 'Active(anon): 134028 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1244 kB' 'Writeback: 0 kB' 'AnonPages: 125128 kB' 'Mapped: 49096 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176184 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84576 kB' 'KernelStack: 6528 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 360316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.611 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.611 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 
00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- 
setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val 
_ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.612 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:23.612 21:17:12 -- setup/common.sh@33 -- # echo 0 00:14:23.612 21:17:12 -- setup/common.sh@33 -- # return 0 00:14:23.612 21:17:12 -- setup/hugepages.sh@100 -- # resv=0 00:14:23.612 21:17:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:23.612 nr_hugepages=1024 00:14:23.612 21:17:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:23.612 resv_hugepages=0 00:14:23.612 21:17:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:23.612 surplus_hugepages=0 00:14:23.612 21:17:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:23.612 anon_hugepages=0 00:14:23.612 21:17:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:23.612 21:17:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:23.612 21:17:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:23.612 21:17:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:23.612 21:17:12 -- setup/common.sh@18 -- # local node= 00:14:23.612 21:17:12 -- setup/common.sh@19 -- # local var val 00:14:23.612 21:17:12 -- setup/common.sh@20 -- # local mem_f mem 00:14:23.612 21:17:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:23.612 21:17:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:23.612 21:17:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:23.612 21:17:12 -- setup/common.sh@28 -- # mapfile -t mem 00:14:23.612 21:17:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.612 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6115116 kB' 'MemAvailable: 9493300 kB' 
'Buffers: 2436 kB' 'Cached: 3577352 kB' 'SwapCached: 0 kB' 'Active: 893284 kB' 'Inactive: 2810060 kB' 'Active(anon): 134024 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1244 kB' 'Writeback: 0 kB' 'AnonPages: 125128 kB' 'Mapped: 49096 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176184 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84576 kB' 'KernelStack: 6528 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 360316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- 
setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 
00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.613 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.613 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 
00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 
00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:23.614 21:17:12 -- setup/common.sh@33 -- # echo 1024 00:14:23.614 21:17:12 -- setup/common.sh@33 -- # return 0 00:14:23.614 21:17:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:23.614 21:17:12 -- setup/hugepages.sh@112 -- # get_nodes 00:14:23.614 21:17:12 -- setup/hugepages.sh@27 -- # local node 00:14:23.614 21:17:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:23.614 21:17:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:23.614 21:17:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:23.614 21:17:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:23.614 21:17:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:23.614 21:17:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:23.614 21:17:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:23.614 21:17:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:23.614 21:17:12 -- setup/common.sh@18 -- # local node=0 00:14:23.614 21:17:12 -- setup/common.sh@19 -- # local var val 00:14:23.614 21:17:12 -- setup/common.sh@20 -- # local mem_f mem 00:14:23.614 21:17:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:23.614 21:17:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:23.614 21:17:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:23.614 21:17:12 -- setup/common.sh@28 -- # mapfile -t mem 00:14:23.614 21:17:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6115116 kB' 'MemUsed: 6126860 kB' 'SwapCached: 0 kB' 'Active: 893288 kB' 'Inactive: 2810060 kB' 'Active(anon): 134028 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1244 kB' 'Writeback: 0 kB' 'FilePages: 3579788 kB' 'Mapped: 49096 kB' 'AnonPages: 125128 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91608 kB' 'Slab: 176184 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 
00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.614 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.614 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- 
setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # continue 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # IFS=': ' 00:14:23.615 21:17:12 -- setup/common.sh@31 -- # read -r var val _ 00:14:23.615 21:17:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:23.615 21:17:12 -- setup/common.sh@33 -- # echo 0 00:14:23.615 21:17:12 -- setup/common.sh@33 -- # return 0 00:14:23.615 21:17:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:23.615 21:17:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:23.615 21:17:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:23.615 21:17:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:23.615 node0=1024 expecting 1024 00:14:23.615 21:17:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:23.615 21:17:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:23.615 00:14:23.615 real 0m0.563s 00:14:23.615 user 0m0.268s 00:14:23.615 sys 0m0.311s 
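The even_2G_alloc test above closes by folding the (zero) surplus into each node's tally and comparing the per-node count against the expected 1024 pages, which is what the "node0=1024 expecting 1024" line records. A small sketch of that final comparison using the values printed in this log; the array layout is simplified and the real script additionally builds sorted_t/sorted_s lookup tables:

declare -a nodes_test=( [0]=1024 )            # node 0 tally, from "node0=1024" above
expected=1024                                 # pages expected per node in this test
surp=0                                        # HugePages_Surp read just before this point
(( nodes_test[0] += surp ))                   # surplus folded into the tally, as traced
echo "node0=${nodes_test[0]} expecting $expected"
[[ ${nodes_test[0]} == "$expected" ]] && echo "even_2G_alloc: per-node count matches"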
00:14:23.615 21:17:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:23.615 21:17:12 -- common/autotest_common.sh@10 -- # set +x 00:14:23.615 ************************************ 00:14:23.615 END TEST even_2G_alloc 00:14:23.615 ************************************ 00:14:23.887 21:17:12 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:14:23.887 21:17:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:23.887 21:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:23.887 21:17:12 -- common/autotest_common.sh@10 -- # set +x 00:14:23.887 ************************************ 00:14:23.887 START TEST odd_alloc 00:14:23.887 ************************************ 00:14:23.887 21:17:12 -- common/autotest_common.sh@1111 -- # odd_alloc 00:14:23.887 21:17:12 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:14:23.887 21:17:12 -- setup/hugepages.sh@49 -- # local size=2098176 00:14:23.887 21:17:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:23.887 21:17:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:23.887 21:17:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:14:23.887 21:17:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:23.887 21:17:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:23.887 21:17:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:23.887 21:17:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:14:23.887 21:17:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:23.887 21:17:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:23.887 21:17:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:23.887 21:17:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:23.887 21:17:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:23.887 21:17:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:23.887 21:17:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:14:23.887 21:17:12 -- setup/hugepages.sh@83 -- # : 0 00:14:23.887 21:17:12 -- setup/hugepages.sh@84 -- # : 0 00:14:23.887 21:17:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:23.887 21:17:12 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:14:23.887 21:17:12 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:14:23.887 21:17:12 -- setup/hugepages.sh@160 -- # setup output 00:14:23.887 21:17:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:23.887 21:17:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:24.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:24.151 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:24.151 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:24.412 21:17:13 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:14:24.412 21:17:13 -- setup/hugepages.sh@89 -- # local node 00:14:24.412 21:17:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:24.412 21:17:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:24.412 21:17:13 -- setup/hugepages.sh@92 -- # local surp 00:14:24.412 21:17:13 -- setup/hugepages.sh@93 -- # local resv 00:14:24.412 21:17:13 -- setup/hugepages.sh@94 -- # local anon 00:14:24.412 21:17:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:24.412 21:17:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:24.412 21:17:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:24.412 21:17:13 -- setup/common.sh@18 -- # local node= 
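For odd_alloc the trace above requests 2098176 kB of hugepage memory (HUGEMEM=2049) and settles on nr_hugepages=1025 on the single node. A quick check that those figures are mutually consistent, using the 2048 kB Hugepagesize reported later in the snapshot; only the numbers themselves are taken from this log, the choice of an odd count is the test's own doing:

hugemem_mb=2049                        # HUGEMEM exported by the odd_alloc test above
size_kb=$(( hugemem_mb * 1024 ))       # 2098176 kB, the argument to get_test_nr_hugepages
page_kb=2048                           # Hugepagesize from the meminfo snapshot
echo $(( size_kb / page_kb ))          # 1024 whole 2 MB pages, with 1024 kB left over
echo "requested nr_hugepages: 1025"    # the odd count the trace arrives at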
00:14:24.412 21:17:13 -- setup/common.sh@19 -- # local var val 00:14:24.412 21:17:13 -- setup/common.sh@20 -- # local mem_f mem 00:14:24.412 21:17:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:24.412 21:17:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:24.412 21:17:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:24.412 21:17:13 -- setup/common.sh@28 -- # mapfile -t mem 00:14:24.412 21:17:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:24.412 21:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6110696 kB' 'MemAvailable: 9488880 kB' 'Buffers: 2436 kB' 'Cached: 3577352 kB' 'SwapCached: 0 kB' 'Active: 893420 kB' 'Inactive: 2810060 kB' 'Active(anon): 134160 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1384 kB' 'Writeback: 0 kB' 'AnonPages: 125228 kB' 'Mapped: 49108 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176228 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84620 kB' 'KernelStack: 6544 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 360484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 
21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 
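The get_meminfo call traced through this stretch first captures the whole meminfo file into an array (mapfile -t mem) and only then scans it: with node unset it falls back to /proc/meminfo, and the "Node <n> " prefix that per-node meminfo files carry is stripped so both sources parse the same way. A hedged sketch of that capture step, with the path handling simplified:

node=""                                              # empty in this run, per "local node=" above
mem_f=/proc/meminfo                                  # default source
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo # per-node snapshot when a node is given
fi
shopt -s extglob
mapfile -t mem < "$mem_f"                            # one array element per meminfo line
mem=( "${mem[@]#Node +([0-9]) }" )                   # drop the "Node 0 " prefix on per-node lines
printf '%s\n' "${mem[@]:0:3}"                        # e.g. MemTotal / MemFree / MemAvailable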
00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.412 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.412 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:24.413 21:17:13 -- setup/common.sh@33 -- # echo 0 00:14:24.413 21:17:13 -- setup/common.sh@33 -- # return 0 00:14:24.413 21:17:13 -- setup/hugepages.sh@97 -- # anon=0 00:14:24.413 21:17:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:24.413 21:17:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:24.413 21:17:13 -- setup/common.sh@18 -- # local node= 00:14:24.413 21:17:13 -- setup/common.sh@19 -- # local var val 00:14:24.413 21:17:13 -- setup/common.sh@20 -- # local mem_f mem 00:14:24.413 21:17:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:24.413 21:17:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:24.413 21:17:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:24.413 21:17:13 -- setup/common.sh@28 -- # mapfile -t mem 00:14:24.413 21:17:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 
21:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6110696 kB' 'MemAvailable: 9488880 kB' 'Buffers: 2436 kB' 'Cached: 3577352 kB' 'SwapCached: 0 kB' 'Active: 893508 kB' 'Inactive: 2810060 kB' 'Active(anon): 134248 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1384 kB' 'Writeback: 0 kB' 'AnonPages: 125360 kB' 'Mapped: 49108 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176228 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84620 kB' 'KernelStack: 6544 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 360484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 
21:17:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.413 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.413 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- 
# IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 
21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read 
-r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.414 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.414 21:17:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.414 21:17:13 -- setup/common.sh@33 -- # echo 0 00:14:24.414 21:17:13 -- setup/common.sh@33 -- # return 0 00:14:24.414 21:17:13 -- setup/hugepages.sh@99 -- # surp=0 00:14:24.414 21:17:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:24.414 21:17:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:24.414 21:17:13 -- setup/common.sh@18 -- # local node= 00:14:24.414 21:17:13 -- setup/common.sh@19 -- # local var val 00:14:24.414 21:17:13 -- setup/common.sh@20 -- # local mem_f mem 00:14:24.414 21:17:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:24.414 21:17:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:24.414 21:17:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:24.414 21:17:13 -- setup/common.sh@28 -- # mapfile -t mem 00:14:24.414 21:17:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:24.415 21:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6110696 kB' 'MemAvailable: 9488880 kB' 'Buffers: 2436 kB' 'Cached: 3577352 kB' 'SwapCached: 0 kB' 'Active: 893348 kB' 'Inactive: 2810060 kB' 'Active(anon): 134088 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1384 kB' 'Writeback: 0 kB' 'AnonPages: 125188 kB' 'Mapped: 49108 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176228 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84620 kB' 'KernelStack: 6528 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 360484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 
'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 
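Between the two long scans here the script records surp=0 and immediately starts the same scan again for HugePages_Rsvd; both counters are needed before the total can be judged. Read straight from /proc/meminfo the equivalent is a one-liner each (awk is used here purely for brevity; the traced scripts stay in pure bash):

surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # 0 in this log
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # 0 in this log
echo "surp=$surp resv=$resv"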
00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 
-- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.415 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.415 21:17:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.416 
21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:24.416 21:17:13 -- setup/common.sh@33 -- # echo 0 00:14:24.416 21:17:13 -- setup/common.sh@33 -- # return 0 00:14:24.416 21:17:13 -- setup/hugepages.sh@100 -- # resv=0 00:14:24.416 nr_hugepages=1025 00:14:24.416 21:17:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:14:24.416 resv_hugepages=0 00:14:24.416 21:17:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:24.416 surplus_hugepages=0 00:14:24.416 21:17:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:24.416 anon_hugepages=0 00:14:24.416 21:17:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:24.416 21:17:13 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:14:24.416 21:17:13 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:14:24.416 21:17:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:24.416 21:17:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:24.416 21:17:13 -- setup/common.sh@18 -- # local node= 00:14:24.416 21:17:13 -- setup/common.sh@19 -- # local var val 00:14:24.416 21:17:13 -- setup/common.sh@20 -- # local mem_f mem 00:14:24.416 21:17:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:24.416 21:17:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:24.416 21:17:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:24.416 21:17:13 -- setup/common.sh@28 -- # mapfile -t mem 00:14:24.416 21:17:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:24.416 21:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6111216 kB' 'MemAvailable: 9489400 kB' 'Buffers: 2436 kB' 'Cached: 3577352 kB' 'SwapCached: 0 kB' 'Active: 893344 kB' 'Inactive: 2810060 kB' 'Active(anon): 134084 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1384 kB' 'Writeback: 0 kB' 'AnonPages: 125188 kB' 'Mapped: 49108 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176228 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84620 kB' 'KernelStack: 6528 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 360484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.416 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.416 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 
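All of the bookkeeping above feeds one consistency check, visible in the trace as (( 1025 == nr_hugepages + surp + resv )): the hugepage count the kernel reports must equal the requested count plus any surplus and reserved pages. A hedged sketch with the values from this run; the variable names mirror the trace, the awk lookup is only for illustration:

nr_hugepages=1025   # requested by the odd_alloc test
surp=0              # HugePages_Surp from the snapshot above
resv=0              # HugePages_Rsvd from the snapshot above
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)   # 1025 in this snapshot
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"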
00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 
21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- 
# read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.417 21:17:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:24.417 21:17:13 -- setup/common.sh@33 -- # echo 1025 00:14:24.417 21:17:13 -- setup/common.sh@33 -- # return 0 00:14:24.417 21:17:13 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:14:24.417 21:17:13 -- setup/hugepages.sh@112 -- # get_nodes 00:14:24.417 21:17:13 -- setup/hugepages.sh@27 -- # local node 00:14:24.417 21:17:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:24.417 21:17:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:14:24.417 21:17:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:24.417 21:17:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:24.417 21:17:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:24.417 21:17:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:24.417 21:17:13 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:24.417 21:17:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:24.417 21:17:13 -- setup/common.sh@18 -- # local node=0 00:14:24.417 21:17:13 -- setup/common.sh@19 -- # local var val 00:14:24.417 21:17:13 -- setup/common.sh@20 -- # local mem_f mem 00:14:24.417 21:17:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:24.417 21:17:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:24.417 21:17:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:24.417 21:17:13 -- setup/common.sh@28 -- # mapfile -t mem 00:14:24.417 21:17:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:24.417 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6111216 kB' 'MemUsed: 6130760 kB' 'SwapCached: 0 kB' 'Active: 893344 kB' 'Inactive: 2810060 kB' 'Active(anon): 134084 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1384 kB' 'Writeback: 0 kB' 'FilePages: 3579788 kB' 'Mapped: 49108 kB' 'AnonPages: 125188 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91608 kB' 'Slab: 176228 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 
00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 
21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # continue 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # IFS=': ' 00:14:24.418 21:17:13 -- setup/common.sh@31 -- # read -r var val _ 00:14:24.418 21:17:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:24.418 21:17:13 -- setup/common.sh@33 -- # echo 0 00:14:24.418 21:17:13 -- setup/common.sh@33 -- # return 0 00:14:24.418 21:17:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:24.418 21:17:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:24.418 21:17:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:24.418 21:17:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:24.418 node0=1025 expecting 1025 00:14:24.418 21:17:13 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:14:24.418 21:17:13 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:14:24.418 00:14:24.418 real 0m0.646s 00:14:24.418 user 0m0.291s 00:14:24.418 sys 0m0.399s 00:14:24.418 21:17:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:24.419 21:17:13 -- common/autotest_common.sh@10 -- # set +x 00:14:24.419 ************************************ 00:14:24.419 END TEST odd_alloc 00:14:24.419 ************************************ 00:14:24.419 21:17:13 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:14:24.419 21:17:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:24.419 21:17:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:24.419 21:17:13 -- common/autotest_common.sh@10 -- # set +x 00:14:24.678 ************************************ 00:14:24.678 START TEST custom_alloc 00:14:24.678 ************************************ 00:14:24.678 21:17:13 -- common/autotest_common.sh@1111 -- # custom_alloc 00:14:24.678 21:17:13 -- setup/hugepages.sh@167 -- # local IFS=, 00:14:24.678 21:17:13 -- setup/hugepages.sh@169 -- # local node 00:14:24.678 21:17:13 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:14:24.678 21:17:13 -- setup/hugepages.sh@170 -- # local nodes_hp 00:14:24.678 21:17:13 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:14:24.678 21:17:13 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:14:24.678 21:17:13 -- setup/hugepages.sh@49 -- # local size=1048576 00:14:24.678 21:17:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:14:24.678 21:17:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:24.678 21:17:13 -- setup/hugepages.sh@57 -- # nr_hugepages=512 
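For readers following the xtrace above: the long runs of "IFS=': '", "read -r var val _" and "continue" are setup/common.sh's get_meminfo walking every field of /proc/meminfo (or a per-node meminfo file) until it reaches the requested key, then echoing its value; the odd_alloc test then checks that HugePages_Total matches the expected odd count (1025). The following is a minimal standalone sketch of that pattern, written for this log only and assuming a hypothetical get_meminfo_sketch helper; it is not a copy of the repo's actual function.

#!/usr/bin/env bash
# Sketch of the meminfo parse loop traced above (assumed, not verbatim from
# setup/common.sh). Returns the value of one field from /proc/meminfo, or
# from /sys/devices/system/node/node<N>/meminfo when a node number is given.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <n> "; strip that, then split
    # on ": " and return the value for the requested field (sizes are in kB).
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

# Example check in the spirit of the odd_alloc verification above:
# [[ $(get_meminfo_sketch HugePages_Total) -eq 1025 ]] && echo 'node0=1025 expecting 1025'

The custom_alloc test that starts next uses the same helper per node (HugePages_Surp, HugePages_Rsvd, AnonHugePages) after distributing 512 hugepages via HUGENODE='nodes_hp[0]=512', which is why the identical parse loop repeats in the trace below.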
00:14:24.678 21:17:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:14:24.678 21:17:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:24.678 21:17:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:24.678 21:17:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:24.678 21:17:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:24.678 21:17:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:24.678 21:17:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:24.678 21:17:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:24.678 21:17:13 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:14:24.678 21:17:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:24.678 21:17:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:14:24.678 21:17:13 -- setup/hugepages.sh@83 -- # : 0 00:14:24.678 21:17:13 -- setup/hugepages.sh@84 -- # : 0 00:14:24.678 21:17:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:14:24.678 21:17:13 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:14:24.678 21:17:13 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:14:24.678 21:17:13 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:14:24.678 21:17:13 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:14:24.678 21:17:13 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:14:24.678 21:17:13 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:14:24.678 21:17:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:14:24.678 21:17:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:24.678 21:17:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:14:24.678 21:17:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:24.678 21:17:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:24.678 21:17:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:24.678 21:17:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:14:24.678 21:17:13 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:14:24.678 21:17:13 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:14:24.678 21:17:13 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:14:24.678 21:17:13 -- setup/hugepages.sh@78 -- # return 0 00:14:24.678 21:17:13 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:14:24.678 21:17:13 -- setup/hugepages.sh@187 -- # setup output 00:14:24.678 21:17:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:24.678 21:17:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:24.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:24.935 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:24.935 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:25.199 21:17:14 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:14:25.199 21:17:14 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:14:25.199 21:17:14 -- setup/hugepages.sh@89 -- # local node 00:14:25.199 21:17:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:25.199 21:17:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:25.199 21:17:14 -- setup/hugepages.sh@92 -- # local surp 00:14:25.199 21:17:14 -- setup/hugepages.sh@93 -- # local resv 00:14:25.199 21:17:14 -- setup/hugepages.sh@94 -- # local anon 00:14:25.199 21:17:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:25.199 21:17:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:25.199 
21:17:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:25.199 21:17:14 -- setup/common.sh@18 -- # local node= 00:14:25.199 21:17:14 -- setup/common.sh@19 -- # local var val 00:14:25.199 21:17:14 -- setup/common.sh@20 -- # local mem_f mem 00:14:25.199 21:17:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:25.199 21:17:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:25.199 21:17:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:25.199 21:17:14 -- setup/common.sh@28 -- # mapfile -t mem 00:14:25.199 21:17:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:25.199 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.199 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7165752 kB' 'MemAvailable: 10543940 kB' 'Buffers: 2436 kB' 'Cached: 3577356 kB' 'SwapCached: 0 kB' 'Active: 893336 kB' 'Inactive: 2810064 kB' 'Active(anon): 134076 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1540 kB' 'Writeback: 0 kB' 'AnonPages: 125464 kB' 'Mapped: 49232 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176252 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84644 kB' 'KernelStack: 6500 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 360652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 
21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.200 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.200 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.201 21:17:14 -- setup/common.sh@33 -- # echo 0 00:14:25.201 21:17:14 -- setup/common.sh@33 -- # return 0 00:14:25.201 21:17:14 -- setup/hugepages.sh@97 -- # anon=0 00:14:25.201 21:17:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:25.201 21:17:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:25.201 21:17:14 -- setup/common.sh@18 -- # local node= 00:14:25.201 21:17:14 -- setup/common.sh@19 -- # local var val 00:14:25.201 21:17:14 -- setup/common.sh@20 -- # local mem_f mem 00:14:25.201 21:17:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:25.201 21:17:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:25.201 21:17:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:25.201 21:17:14 -- setup/common.sh@28 -- # mapfile -t mem 00:14:25.201 21:17:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7165752 kB' 'MemAvailable: 10543940 kB' 'Buffers: 2436 kB' 'Cached: 3577356 kB' 'SwapCached: 0 kB' 'Active: 893476 kB' 'Inactive: 2810064 kB' 'Active(anon): 134216 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1540 kB' 'Writeback: 0 kB' 'AnonPages: 125316 kB' 'Mapped: 49116 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176252 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84644 kB' 'KernelStack: 6528 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 360652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 
00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.201 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.201 21:17:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 
21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.202 21:17:14 -- setup/common.sh@33 -- # echo 0 00:14:25.202 21:17:14 -- setup/common.sh@33 -- # return 0 00:14:25.202 21:17:14 -- setup/hugepages.sh@99 -- # surp=0 00:14:25.202 21:17:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:25.202 21:17:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:25.202 21:17:14 -- setup/common.sh@18 -- # local node= 00:14:25.202 21:17:14 -- setup/common.sh@19 -- # local var val 00:14:25.202 21:17:14 -- setup/common.sh@20 -- # local mem_f mem 00:14:25.202 21:17:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:25.202 21:17:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:25.202 21:17:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:25.202 21:17:14 -- setup/common.sh@28 -- # mapfile -t mem 00:14:25.202 21:17:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7165752 kB' 'MemAvailable: 10543940 kB' 'Buffers: 2436 kB' 'Cached: 3577356 kB' 'SwapCached: 0 kB' 'Active: 893380 kB' 'Inactive: 2810064 kB' 'Active(anon): 134120 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1280 kB' 'Writeback: 0 kB' 'AnonPages: 125244 kB' 'Mapped: 49116 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176252 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84644 kB' 'KernelStack: 6528 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 360652 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 
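Every field in the snapshot above is walked the same way: the harness captures the whole meminfo file with mapfile, then loops with IFS=': ' and read -r var val _, skipping (continue) each key until the one it was asked for (HugePages_Rsvd here) matches, at which point the value is echoed back. A minimal standalone sketch of that lookup pattern, assuming only what the trace shows; the function name below is illustrative, not the repo's helper:

lookup_meminfo() {
    # Read a meminfo-style file line by line and print the value for one key,
    # mirroring the IFS=': ' / read -r var val _ loop seen in the trace.
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

lookup_meminfo HugePages_Rsvd    # prints 0 on this run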
00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.202 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.202 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # 
IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
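The backslash-riddled right-hand sides in these comparisons (e.g. \H\u\g\e\P\a\g\e\s\_\R\s\v\d) are an artifact of xtrace rather than of the data: when the key is passed as a quoted variable to [[ == ]], set -x re-quotes it character by character so the log records a literal match instead of a glob. A tiny reproduction, offered only as an illustration of that logging behaviour:

get=HugePages_Rsvd
var=Slab
set -x
[[ $var == "$get" ]] || echo "no match, keep scanning"   # traced as [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
set +x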
00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 
-- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.203 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.203 21:17:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.203 21:17:14 -- setup/common.sh@33 -- # echo 0 00:14:25.203 21:17:14 -- setup/common.sh@33 -- # return 0 00:14:25.203 21:17:14 -- setup/hugepages.sh@100 -- # resv=0 00:14:25.203 nr_hugepages=512 00:14:25.203 21:17:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:14:25.203 resv_hugepages=0 00:14:25.203 21:17:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:25.203 surplus_hugepages=0 00:14:25.203 21:17:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:25.203 anon_hugepages=0 00:14:25.203 21:17:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:25.203 21:17:14 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:25.203 21:17:14 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:14:25.203 21:17:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:25.203 21:17:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:25.203 21:17:14 -- setup/common.sh@18 -- # local node= 00:14:25.203 21:17:14 -- setup/common.sh@19 -- # local var val 00:14:25.203 21:17:14 -- setup/common.sh@20 -- # local mem_f mem 00:14:25.203 21:17:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:25.204 21:17:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:25.204 21:17:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:25.204 21:17:14 -- setup/common.sh@28 -- # mapfile -t mem 00:14:25.204 21:17:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7165752 kB' 'MemAvailable: 10543940 kB' 'Buffers: 2436 kB' 'Cached: 3577356 kB' 'SwapCached: 0 kB' 'Active: 893412 kB' 'Inactive: 2810064 kB' 'Active(anon): 134152 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 125244 kB' 'Mapped: 49116 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176252 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84644 kB' 'KernelStack: 6528 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 360652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 
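By this point the lookups have returned surp=0 and resv=0, nr_hugepages=512 is echoed, and the (( 512 == nr_hugepages + surp + resv )) check passes. The figures are self-consistent: 512 pages at Hugepagesize: 2048 kB is exactly the Hugetlb: 1048576 kB reported in the snapshot. A rough standalone restatement of that consistency check, using /proc/meminfo as the only input (variable names are illustrative):

expected=512
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

# Pool accounting as checked by the test: allocated == requested + surplus + reserved.
(( total == expected + surp + resv )) && echo "hugepage pool matches the request"

# 512 * 2048 kB = 1048576 kB, the Hugetlb figure in the snapshot above.
echo "hugetlb footprint: $(( total * size_kb )) kB"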
00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.204 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.204 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
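Once this scan returns 512 for HugePages_Total, the harness enumerates the NUMA nodes, assigns the expected 512 pages to node 0 (the only node on this VM) and repeats the same lookup against the node's own file, /sys/devices/system/node/node0/meminfo. A hedged per-node sketch of that step; the loop below is a simplification, since the real helper strips the "Node N " prefix with an extglob and reuses its IFS=': ' reader:

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total:   512",
    # so pick the field after the key instead of reusing the plain reader.
    pages=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    echo "node${node}=${pages} expecting 512"
done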
00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.205 21:17:14 -- setup/common.sh@33 -- # echo 512 00:14:25.205 21:17:14 -- setup/common.sh@33 -- # return 0 00:14:25.205 21:17:14 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:14:25.205 21:17:14 -- setup/hugepages.sh@112 -- # get_nodes 00:14:25.205 21:17:14 -- setup/hugepages.sh@27 -- # local node 00:14:25.205 21:17:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:25.205 21:17:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:14:25.205 21:17:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:25.205 21:17:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:25.205 21:17:14 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:14:25.205 21:17:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:25.205 21:17:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:25.205 21:17:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:25.205 21:17:14 -- setup/common.sh@18 -- # local node=0 00:14:25.205 21:17:14 -- setup/common.sh@19 -- # local var val 00:14:25.205 21:17:14 -- setup/common.sh@20 -- # local mem_f mem 00:14:25.205 21:17:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:25.205 21:17:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:25.205 21:17:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:25.205 21:17:14 -- setup/common.sh@28 -- # mapfile -t mem 00:14:25.205 21:17:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7165752 kB' 'MemUsed: 5076224 kB' 'SwapCached: 0 kB' 'Active: 893436 kB' 'Inactive: 2810064 kB' 'Active(anon): 134176 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810064 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'FilePages: 3579792 kB' 'Mapped: 49116 kB' 'AnonPages: 125356 kB' 'Shmem: 10468 kB' 'KernelStack: 6544 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91608 kB' 'Slab: 176252 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 
21:17:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.205 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.205 21:17:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 
-- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # continue 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.206 21:17:14 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.206 21:17:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.206 21:17:14 -- setup/common.sh@33 -- # echo 0 00:14:25.206 21:17:14 -- setup/common.sh@33 -- # return 0 00:14:25.206 21:17:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:25.206 21:17:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:25.206 21:17:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:25.206 21:17:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:25.206 node0=512 expecting 512 00:14:25.206 21:17:14 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:14:25.206 21:17:14 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:14:25.206 00:14:25.206 real 0m0.676s 00:14:25.206 user 0m0.338s 00:14:25.206 sys 0m0.383s 00:14:25.206 21:17:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:25.206 21:17:14 -- common/autotest_common.sh@10 -- # set +x 00:14:25.206 ************************************ 00:14:25.206 END TEST custom_alloc 00:14:25.206 ************************************ 00:14:25.466 21:17:14 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:14:25.466 21:17:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:25.467 21:17:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:25.467 21:17:14 -- common/autotest_common.sh@10 -- # set +x 00:14:25.467 ************************************ 00:14:25.467 START TEST no_shrink_alloc 00:14:25.467 ************************************ 00:14:25.467 21:17:14 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:14:25.467 21:17:14 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:14:25.467 21:17:14 -- setup/hugepages.sh@49 -- # local size=2097152 00:14:25.467 21:17:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:25.467 21:17:14 -- setup/hugepages.sh@51 -- # shift 00:14:25.467 21:17:14 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:25.467 21:17:14 -- setup/hugepages.sh@52 -- # local node_ids 00:14:25.467 21:17:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:25.467 21:17:14 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:14:25.467 21:17:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:25.467 21:17:14 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:25.467 21:17:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:25.467 21:17:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:25.467 21:17:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:25.467 21:17:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:25.467 21:17:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:25.467 21:17:14 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:25.467 21:17:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:25.467 21:17:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:14:25.467 21:17:14 -- setup/hugepages.sh@73 -- # return 0 00:14:25.467 21:17:14 -- setup/hugepages.sh@198 -- # setup output 00:14:25.467 21:17:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:25.467 21:17:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:25.726 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:25.988 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:25.988 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:25.988 21:17:15 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:14:25.988 21:17:15 -- setup/hugepages.sh@89 -- # local node 00:14:25.988 21:17:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:25.988 21:17:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:25.988 21:17:15 -- setup/hugepages.sh@92 -- # local surp 00:14:25.988 21:17:15 -- setup/hugepages.sh@93 -- # local resv 00:14:25.988 21:17:15 -- setup/hugepages.sh@94 -- # local anon 00:14:25.988 21:17:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:25.988 21:17:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:25.988 21:17:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:25.988 21:17:15 -- setup/common.sh@18 -- # local node= 00:14:25.988 21:17:15 -- setup/common.sh@19 -- # local var val 00:14:25.988 21:17:15 -- setup/common.sh@20 -- # local mem_f mem 00:14:25.988 21:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:25.988 21:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:25.988 21:17:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:25.988 21:17:15 -- setup/common.sh@28 -- # mapfile -t mem 00:14:25.988 21:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:25.988 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.988 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6120516 kB' 'MemAvailable: 9498708 kB' 'Buffers: 2436 kB' 'Cached: 3577360 kB' 'SwapCached: 0 kB' 'Active: 888252 kB' 'Inactive: 2810068 kB' 'Active(anon): 128992 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 120108 kB' 'Mapped: 48712 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176120 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84512 kB' 'KernelStack: 6452 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 341472 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 
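For no_shrink_alloc the requested size is converted to a page count before anything is verified: assuming the 2097152 argument is in kB (consistent with the result), 2097152 kB at the default 2048 kB hugepage size is 1024 pages, matching the HugePages_Total: 1024 and Hugetlb: 2097152 kB visible in this snapshot. verify_nr_hugepages then checks that transparent hugepages are not set to never and, since they are not, reads AnonHugePages (0 here) as the anon-hugepage baseline before counting the pool. A small sketch of that size-to-pages conversion, with names made up for illustration:

size_kb=2097152
default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this VM

# Refuse requests smaller than a single hugepage, then round down to whole pages.
(( size_kb >= default_kb )) || { echo "request smaller than one hugepage" >&2; exit 1; }
nr_hugepages=$(( size_kb / default_kb ))
echo "nr_hugepages=$nr_hugepages"    # 2097152 / 2048 = 1024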
00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 
21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # 
continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:25.989 21:17:15 -- setup/common.sh@33 -- # echo 0 00:14:25.989 21:17:15 -- setup/common.sh@33 -- # return 0 00:14:25.989 21:17:15 -- setup/hugepages.sh@97 -- # anon=0 00:14:25.989 21:17:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:25.989 21:17:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:25.989 21:17:15 -- setup/common.sh@18 -- # local node= 00:14:25.989 21:17:15 -- setup/common.sh@19 -- # local var val 00:14:25.989 21:17:15 -- setup/common.sh@20 -- # local mem_f mem 00:14:25.989 21:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:25.989 21:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:25.989 21:17:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:25.989 21:17:15 -- setup/common.sh@28 -- # mapfile -t mem 00:14:25.989 21:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:25.989 21:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6120968 kB' 'MemAvailable: 9499160 kB' 'Buffers: 2436 kB' 'Cached: 3577360 kB' 'SwapCached: 0 kB' 'Active: 888284 kB' 'Inactive: 2810068 kB' 'Active(anon): 129024 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 120172 kB' 'Mapped: 48396 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176116 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84508 kB' 'KernelStack: 6384 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # 
continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # 
read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 
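Note: the long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' / 'continue' entries above and below are xtrace output of a single field-matching loop in setup/common.sh; bash's xtrace escapes every character of the pattern operand of ==, which is why the right-hand side appears backslash-separated. A minimal sketch of the helper being traced, reconstructed from this log rather than copied from the SPDK sources (names follow the trace, the exact structure is an approximation):

  shopt -s extglob                      # needed for the +([0-9]) prefix strip below

  get_meminfo() {                       # usage: get_meminfo <field> [node]
          local get=$1 node=$2
          local var val
          local mem_f mem
          mem_f=/proc/meminfo
          # With a node argument, read that node's own meminfo instead.
          if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                  mem_f=/sys/devices/system/node/node$node/meminfo
          fi
          mapfile -t mem < "$mem_f"
          # Per-node meminfo prefixes every line with "Node N "; strip it.
          mem=("${mem[@]#Node +([0-9]) }")
          while IFS=': ' read -r var val _; do
                  [[ $var == "$get" ]] || continue
                  echo "$val"
                  return 0
          done < <(printf '%s\n' "${mem[@]}")
          return 1
  }

Each 'echo 0' / 'return 0' pair in the trace is this helper reporting the matched value back to hugepages.sh.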
00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.989 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.989 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.990 21:17:15 -- setup/common.sh@33 -- # echo 0 00:14:25.990 21:17:15 -- setup/common.sh@33 -- # return 0 00:14:25.990 21:17:15 -- setup/hugepages.sh@99 -- # surp=0 00:14:25.990 21:17:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:25.990 21:17:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:25.990 21:17:15 -- setup/common.sh@18 -- # local node= 00:14:25.990 21:17:15 -- setup/common.sh@19 -- # local var val 00:14:25.990 21:17:15 -- setup/common.sh@20 -- # local mem_f mem 00:14:25.990 21:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:25.990 21:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:25.990 21:17:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:25.990 21:17:15 -- setup/common.sh@28 -- # mapfile -t mem 00:14:25.990 21:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6120968 kB' 'MemAvailable: 9499160 kB' 'Buffers: 2436 kB' 'Cached: 3577360 kB' 'SwapCached: 0 kB' 'Active: 887828 kB' 'Inactive: 2810068 kB' 'Active(anon): 128568 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 119768 kB' 'Mapped: 48388 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176116 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84508 kB' 'KernelStack: 6400 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 
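As a quick cross-check of the /proc/meminfo snapshot printed a few entries above (figures copied from this log, not recomputed on the test host): 1024 hugepages at a Hugepagesize of 2048 kB account for

  echo $((1024 * 2048))   # -> 2097152 (kB), matching the 'Hugetlb: 2097152 kB' field

and HugePages_Free equals HugePages_Total, so none of the pool is in use yet at this point in the run.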
00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- 
setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 
00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:25.990 21:17:15 -- setup/common.sh@33 -- # echo 0 00:14:25.990 21:17:15 -- setup/common.sh@33 -- # return 0 00:14:25.990 21:17:15 -- setup/hugepages.sh@100 -- # resv=0 00:14:25.990 nr_hugepages=1024 00:14:25.990 21:17:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:25.990 resv_hugepages=0 00:14:25.990 21:17:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:25.990 surplus_hugepages=0 00:14:25.990 21:17:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:25.990 anon_hugepages=0 00:14:25.990 21:17:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:25.990 21:17:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:25.990 21:17:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:25.990 21:17:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:25.990 21:17:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:25.990 21:17:15 -- setup/common.sh@18 -- # local node= 00:14:25.990 21:17:15 -- setup/common.sh@19 -- # local var val 00:14:25.990 21:17:15 -- setup/common.sh@20 -- # local mem_f mem 00:14:25.990 21:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
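At this point hugepages.sh has collected anon=0, surp=0 and resv=0 and starts querying HugePages_Total (the scan directly below). The check it is building up to amounts to the following accounting, shown here with the values observed in this run (a sketch of the assertion, not the verbatim test code):

  nr_hugepages=1024   # requested pool size, echoed above
  surp=0              # get_meminfo HugePages_Surp
  resv=0              # get_meminfo HugePages_Rsvd
  total=1024          # get_meminfo HugePages_Total, read just below
  (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'

A mismatch between the observed total and the requested count plus the surplus and reserved counters is what this verify step is designed to catch.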
00:14:25.990 21:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:25.990 21:17:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:25.990 21:17:15 -- setup/common.sh@28 -- # mapfile -t mem 00:14:25.990 21:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6120968 kB' 'MemAvailable: 9499160 kB' 'Buffers: 2436 kB' 'Cached: 3577360 kB' 'SwapCached: 0 kB' 'Active: 887828 kB' 'Inactive: 2810068 kB' 'Active(anon): 128568 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 119768 kB' 'Mapped: 48388 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176116 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84508 kB' 'KernelStack: 6400 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.990 21:17:15 -- 
setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.990 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.990 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 
00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 
00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 
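Once HugePages_Total has been matched (end of the scan just below), the script switches to a per-NUMA-node pass: get_nodes enumerates /sys/devices/system/node/node*, the per-node counters are adjusted by the reserved and surplus values, and the result is compared against the expectation, ending in the 'node0=1024 expecting 1024' line further down. A rough sketch of that pass, seeded with the node0 value observed in this run because the original initialisation of nodes_test is outside this excerpt (variable names follow the trace; the structure is approximated):

  shopt -s extglob
  nodes_test[0]=1024                      # node0 hugepage count seen in this run
  resv=0                                  # HugePages_Rsvd read above
  no_nodes=0
  for node in /sys/devices/system/node/node+([0-9]); do
          nodes_sys[${node##*node}]=1024  # expected hugepages per node
          ((++no_nodes))
  done
  (( no_nodes > 0 ))                      # at least one node must be present
  for node in "${!nodes_test[@]}"; do
          surp=0                          # get_meminfo HugePages_Surp <node>; 0 in this run
          (( nodes_test[node] += resv + surp ))
  done
  for node in "${!nodes_test[@]}"; do
          echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
          [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]]
  done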
00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:25.991 21:17:15 -- setup/common.sh@33 -- # echo 1024 00:14:25.991 21:17:15 -- setup/common.sh@33 -- # return 0 00:14:25.991 21:17:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:25.991 21:17:15 -- setup/hugepages.sh@112 -- # get_nodes 00:14:25.991 21:17:15 -- setup/hugepages.sh@27 -- # local node 00:14:25.991 21:17:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:25.991 21:17:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:25.991 21:17:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:25.991 21:17:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:25.991 21:17:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:25.991 21:17:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:25.991 21:17:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:25.991 21:17:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:25.991 21:17:15 -- setup/common.sh@18 -- # local node=0 00:14:25.991 21:17:15 -- setup/common.sh@19 -- # local var val 00:14:25.991 21:17:15 -- setup/common.sh@20 -- # local mem_f mem 00:14:25.991 21:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:25.991 21:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:25.991 21:17:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:25.991 21:17:15 -- setup/common.sh@28 -- # mapfile -t mem 00:14:25.991 21:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:25.991 21:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6121604 kB' 'MemUsed: 6120372 kB' 'SwapCached: 0 kB' 'Active: 888000 kB' 'Inactive: 2810068 kB' 'Active(anon): 128740 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'FilePages: 3579796 kB' 'Mapped: 48388 kB' 'AnonPages: 119940 kB' 'Shmem: 10468 kB' 'KernelStack: 6368 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91608 kB' 'Slab: 176116 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': 
' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- 
setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # continue 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:25.991 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:25.991 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:25.991 21:17:15 -- setup/common.sh@33 -- # echo 0 00:14:25.991 21:17:15 -- setup/common.sh@33 -- # return 0 00:14:25.991 21:17:15 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:14:25.991 21:17:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:25.991 21:17:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:25.991 21:17:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:25.991 node0=1024 expecting 1024 00:14:25.991 21:17:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:25.991 21:17:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:25.991 21:17:15 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:14:25.991 21:17:15 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:14:25.991 21:17:15 -- setup/hugepages.sh@202 -- # setup output 00:14:25.991 21:17:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:25.991 21:17:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:26.563 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:26.563 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:26.563 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:26.563 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:14:26.563 21:17:15 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:14:26.563 21:17:15 -- setup/hugepages.sh@89 -- # local node 00:14:26.563 21:17:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:14:26.563 21:17:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:14:26.563 21:17:15 -- setup/hugepages.sh@92 -- # local surp 00:14:26.563 21:17:15 -- setup/hugepages.sh@93 -- # local resv 00:14:26.563 21:17:15 -- setup/hugepages.sh@94 -- # local anon 00:14:26.564 21:17:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:14:26.564 21:17:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:14:26.564 21:17:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:14:26.564 21:17:15 -- setup/common.sh@18 -- # local node= 00:14:26.564 21:17:15 -- setup/common.sh@19 -- # local var val 00:14:26.564 21:17:15 -- setup/common.sh@20 -- # local mem_f mem 00:14:26.564 21:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:26.564 21:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:26.564 21:17:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:26.564 21:17:15 -- setup/common.sh@28 -- # mapfile -t mem 00:14:26.564 21:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6122312 kB' 'MemAvailable: 9500504 kB' 'Buffers: 2436 kB' 'Cached: 3577360 kB' 'SwapCached: 0 kB' 'Active: 888488 kB' 'Inactive: 2810068 kB' 'Active(anon): 129228 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 708 kB' 'Writeback: 0 kB' 'AnonPages: 120352 kB' 'Mapped: 48504 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176104 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84496 kB' 'KernelStack: 6388 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- 
setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # 
read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.564 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.564 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:14:26.565 21:17:15 -- setup/common.sh@33 -- # echo 0 00:14:26.565 21:17:15 -- setup/common.sh@33 -- # return 0 00:14:26.565 21:17:15 -- setup/hugepages.sh@97 -- # anon=0 00:14:26.565 21:17:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:14:26.565 21:17:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:26.565 21:17:15 -- setup/common.sh@18 -- # local node= 00:14:26.565 21:17:15 -- setup/common.sh@19 -- # local var val 00:14:26.565 21:17:15 -- setup/common.sh@20 -- # local mem_f mem 00:14:26.565 21:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:26.565 21:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:26.565 21:17:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:26.565 21:17:15 -- setup/common.sh@28 -- # mapfile -t mem 00:14:26.565 21:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:26.565 21:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6122744 kB' 'MemAvailable: 9500936 kB' 'Buffers: 2436 kB' 'Cached: 3577360 kB' 'SwapCached: 0 kB' 'Active: 887996 kB' 'Inactive: 2810068 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 708 kB' 'Writeback: 0 kB' 'AnonPages: 119844 kB' 'Mapped: 48388 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176100 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84492 kB' 'KernelStack: 6400 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r 
var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 
21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
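[editor's note] The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" followed by "continue" throughout this section are xtrace output from get_meminfo in setup/common.sh: it reads /proc/meminfo into an array and walks it one key at a time until it reaches the requested field (here HugePages_Surp), echoing that field's value and returning. A minimal stand-alone sketch of that lookup, reconstructed from the trace — it assumes the system-wide /proc/meminfo and skips the per-node meminfo handling, so it is not the SPDK script verbatim:

#!/usr/bin/env bash
# get_meminfo <field>: print the value of <field> from /proc/meminfo
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # every non-matching key produces one "continue" entry in the trace above
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Surp   # prints 0 in this run, per the trace
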
00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.565 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.565 21:17:15 -- setup/common.sh@31 -- # read -r 
var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.566 21:17:15 -- 
setup/common.sh@33 -- # echo 0 00:14:26.566 21:17:15 -- setup/common.sh@33 -- # return 0 00:14:26.566 21:17:15 -- setup/hugepages.sh@99 -- # surp=0 00:14:26.566 21:17:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:14:26.566 21:17:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:14:26.566 21:17:15 -- setup/common.sh@18 -- # local node= 00:14:26.566 21:17:15 -- setup/common.sh@19 -- # local var val 00:14:26.566 21:17:15 -- setup/common.sh@20 -- # local mem_f mem 00:14:26.566 21:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:26.566 21:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:26.566 21:17:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:26.566 21:17:15 -- setup/common.sh@28 -- # mapfile -t mem 00:14:26.566 21:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6122744 kB' 'MemAvailable: 9500936 kB' 'Buffers: 2436 kB' 'Cached: 3577360 kB' 'SwapCached: 0 kB' 'Active: 888000 kB' 'Inactive: 2810068 kB' 'Active(anon): 128740 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 708 kB' 'Writeback: 0 kB' 'AnonPages: 119844 kB' 'Mapped: 48388 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176100 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84492 kB' 'KernelStack: 6400 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- 
setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.566 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.566 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 
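[editor's note] Once the AnonHugePages, HugePages_Surp and HugePages_Rsvd lookups in this trace all come back 0, verify_nr_hugepages only has to confirm the pool size itself; the arithmetic checks appear further down (hugepages.sh@107 and @109). A hedged sketch of that bookkeeping using the values visible in this run — the meminfo_val helper name is ours, not the script's, and the variable names mirror the trace rather than the exact source:

#!/usr/bin/env bash
# print the numeric value of one /proc/meminfo field (hypothetical helper)
meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

anon=$(meminfo_val AnonHugePages)     # 0 kB in this run
surp=$(meminfo_val HugePages_Surp)    # 0
resv=$(meminfo_val HugePages_Rsvd)    # 0
total=$(meminfo_val HugePages_Total)  # 1024

nr_hugepages=1024   # the pool size the test expects on node0

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# the checks traced below: every allocated page must be accounted for,
# and none of it may be surplus or reserved
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting"
(( total == nr_hugepages ))               || echo "surplus/reserved pages present"
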
00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:14:26.567 21:17:15 -- setup/common.sh@33 -- # echo 0 00:14:26.567 21:17:15 -- setup/common.sh@33 -- # return 0 00:14:26.567 21:17:15 -- setup/hugepages.sh@100 -- # resv=0 00:14:26.567 nr_hugepages=1024 00:14:26.567 21:17:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:14:26.567 resv_hugepages=0 00:14:26.567 21:17:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:14:26.567 surplus_hugepages=0 00:14:26.567 21:17:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:14:26.567 anon_hugepages=0 00:14:26.567 21:17:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:14:26.567 21:17:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:26.567 21:17:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:14:26.567 21:17:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:14:26.567 21:17:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:14:26.567 21:17:15 -- setup/common.sh@18 -- # local node= 00:14:26.567 21:17:15 -- setup/common.sh@19 -- # local var val 00:14:26.567 21:17:15 -- setup/common.sh@20 -- # local mem_f mem 00:14:26.567 21:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:26.567 21:17:15 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:14:26.567 21:17:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:26.567 21:17:15 -- setup/common.sh@28 -- # mapfile -t mem 00:14:26.567 21:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6123076 kB' 'MemAvailable: 9501268 kB' 'Buffers: 2436 kB' 'Cached: 3577360 kB' 'SwapCached: 0 kB' 'Active: 887740 kB' 'Inactive: 2810068 kB' 'Active(anon): 128480 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 708 kB' 'Writeback: 0 kB' 'AnonPages: 119844 kB' 'Mapped: 48388 kB' 'Shmem: 10468 kB' 'KReclaimable: 91608 kB' 'Slab: 176100 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84492 kB' 'KernelStack: 6400 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 6111232 kB' 'DirectMap1G: 8388608 kB' 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.567 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.567 21:17:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 
21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.568 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.568 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:14:26.569 21:17:15 -- setup/common.sh@33 -- # echo 1024 00:14:26.569 21:17:15 -- setup/common.sh@33 -- # return 0 00:14:26.569 21:17:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:14:26.569 21:17:15 -- setup/hugepages.sh@112 -- # get_nodes 00:14:26.569 21:17:15 -- setup/hugepages.sh@27 -- # local node 00:14:26.569 21:17:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:26.569 21:17:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:14:26.569 21:17:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:26.569 21:17:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:26.569 21:17:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:14:26.569 21:17:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:14:26.569 21:17:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:14:26.569 21:17:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:14:26.569 21:17:15 -- setup/common.sh@18 -- # local node=0 00:14:26.569 21:17:15 -- setup/common.sh@19 -- # local var val 00:14:26.569 21:17:15 -- setup/common.sh@20 -- # local mem_f mem 00:14:26.569 21:17:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:26.569 21:17:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:14:26.569 21:17:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:14:26.569 21:17:15 -- setup/common.sh@28 -- # mapfile -t mem 00:14:26.569 21:17:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:26.569 21:17:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6123076 kB' 'MemUsed: 6118900 kB' 'SwapCached: 0 kB' 'Active: 887744 kB' 'Inactive: 2810068 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 759260 kB' 'Inactive(file): 2810068 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 708 kB' 'Writeback: 0 kB' 'FilePages: 3579796 kB' 'Mapped: 48388 kB' 'AnonPages: 119844 kB' 'Shmem: 10468 kB' 'KernelStack: 6400 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91608 kB' 'Slab: 176100 kB' 'SReclaimable: 91608 kB' 'SUnreclaim: 84492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var 
val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.569 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.569 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.830 21:17:15 -- setup/common.sh@32 -- # continue 00:14:26.830 21:17:15 -- setup/common.sh@31 -- # IFS=': ' 00:14:26.831 21:17:15 -- setup/common.sh@31 -- # read -r var val _ 00:14:26.831 21:17:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:14:26.831 21:17:15 -- setup/common.sh@33 -- # echo 0 00:14:26.831 21:17:15 -- setup/common.sh@33 -- # return 0 00:14:26.831 21:17:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:14:26.831 21:17:15 -- 
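The trace above is setup/common.sh walking /proc/meminfo (and then the per-node /sys/devices/system/node/node0/meminfo) field by field with IFS=': ', skipping every key until it reaches HugePages_Total and HugePages_Surp and echoing their values. A minimal stand-in for that lookup is sketched below; it assumes the same "Key:   value [kB]" layout and is an illustration, not the real helper in test/setup/common.sh.

#!/usr/bin/env bash
# Illustrative stand-in for the meminfo lookup traced above (not the real setup/common.sh).
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # With a node index, read the per-node sysfs file instead of the global one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it so both layouts
    # reduce to "Key:   value [kB]".
    mem=("${mem[@]#Node $node }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total      # 1024 in the run traced above
get_meminfo HugePages_Surp 0     # surplus hugepages on node 0; 0 here

The test then only has to check that the reported total matches nr_hugepages plus surplus plus reserved pages, which is the (( 1024 == nr_hugepages + surp + resv )) assertion visible above.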
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:14:26.831 21:17:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:14:26.831 21:17:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:14:26.831 node0=1024 expecting 1024 00:14:26.831 21:17:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:14:26.831 21:17:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:14:26.831 00:14:26.831 real 0m1.274s 00:14:26.831 user 0m0.557s 00:14:26.831 sys 0m0.799s 00:14:26.831 21:17:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:26.831 21:17:15 -- common/autotest_common.sh@10 -- # set +x 00:14:26.831 ************************************ 00:14:26.831 END TEST no_shrink_alloc 00:14:26.831 ************************************ 00:14:26.831 21:17:15 -- setup/hugepages.sh@217 -- # clear_hp 00:14:26.831 21:17:15 -- setup/hugepages.sh@37 -- # local node hp 00:14:26.831 21:17:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:14:26.831 21:17:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:26.831 21:17:15 -- setup/hugepages.sh@41 -- # echo 0 00:14:26.831 21:17:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:26.831 21:17:15 -- setup/hugepages.sh@41 -- # echo 0 00:14:26.831 21:17:15 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:14:26.831 21:17:15 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:14:26.831 00:14:26.831 real 0m6.047s 00:14:26.831 user 0m2.622s 00:14:26.831 sys 0m3.515s 00:14:26.831 21:17:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:26.831 21:17:15 -- common/autotest_common.sh@10 -- # set +x 00:14:26.831 ************************************ 00:14:26.831 END TEST hugepages 00:14:26.831 ************************************ 00:14:26.831 21:17:15 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:14:26.831 21:17:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:26.831 21:17:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:26.831 21:17:15 -- common/autotest_common.sh@10 -- # set +x 00:14:26.831 ************************************ 00:14:26.831 START TEST driver 00:14:26.831 ************************************ 00:14:26.831 21:17:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:14:27.091 * Looking for test storage... 
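The clear_hp calls traced just above end the hugepages suite by returning every reserved hugepage of every size on every NUMA node back to the kernel. A condensed sketch of that sysfs walk (needs root; the CLEAR_HUGE export simply mirrors what the trace shows being set for later setup.sh runs):

#!/usr/bin/env bash
# Illustrative hugepage reset mirroring the clear_hp trace above.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        # Drop every reserved hugepage of this size on this node.
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes   # exported for the following setup.sh invocations, as in the trace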
00:14:27.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:27.091 21:17:16 -- setup/driver.sh@68 -- # setup reset 00:14:27.091 21:17:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:27.091 21:17:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:28.031 21:17:16 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:14:28.031 21:17:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:28.031 21:17:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.031 21:17:16 -- common/autotest_common.sh@10 -- # set +x 00:14:28.031 ************************************ 00:14:28.031 START TEST guess_driver 00:14:28.031 ************************************ 00:14:28.031 21:17:17 -- common/autotest_common.sh@1111 -- # guess_driver 00:14:28.031 21:17:17 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:14:28.031 21:17:17 -- setup/driver.sh@47 -- # local fail=0 00:14:28.031 21:17:17 -- setup/driver.sh@49 -- # pick_driver 00:14:28.031 21:17:17 -- setup/driver.sh@36 -- # vfio 00:14:28.031 21:17:17 -- setup/driver.sh@21 -- # local iommu_grups 00:14:28.031 21:17:17 -- setup/driver.sh@22 -- # local unsafe_vfio 00:14:28.031 21:17:17 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:14:28.031 21:17:17 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:14:28.031 21:17:17 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:14:28.031 21:17:17 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:14:28.031 21:17:17 -- setup/driver.sh@32 -- # return 1 00:14:28.031 21:17:17 -- setup/driver.sh@38 -- # uio 00:14:28.031 21:17:17 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:14:28.031 21:17:17 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:14:28.031 21:17:17 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:14:28.031 21:17:17 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:14:28.031 21:17:17 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:14:28.031 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:14:28.031 21:17:17 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:14:28.031 Looking for driver=uio_pci_generic 00:14:28.031 21:17:17 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:14:28.031 21:17:17 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:14:28.031 21:17:17 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:14:28.031 21:17:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:28.031 21:17:17 -- setup/driver.sh@45 -- # setup output config 00:14:28.031 21:17:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:28.031 21:17:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:28.600 21:17:17 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:14:28.600 21:17:17 -- setup/driver.sh@58 -- # continue 00:14:28.600 21:17:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:28.857 21:17:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:14:28.857 21:17:17 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:14:28.857 21:17:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:28.857 21:17:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:14:28.857 21:17:17 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:14:28.857 21:17:17 -- 
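The driver test above settles on uio_pci_generic by first checking whether VFIO is usable (the unsafe-noiommu knob exists or IOMMU groups are populated) and, when it is not, confirming with modprobe --show-depends that uio_pci_generic and its uio dependency are really available before echoing the choice. A rough sketch of that decision follows; it mirrors the traced logic but is not the actual setup/driver.sh.

#!/usr/bin/env bash
# Illustrative driver pick mirroring the traced decision (not the real setup/driver.sh).
shopt -s nullglob

pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # Prefer vfio-pci when the IOMMU is populated or noiommu mode is explicitly allowed.
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # Otherwise fall back to uio_pci_generic, but only if modprobe can resolve it
    # (the trace above shows the uio.ko.xz + uio_pci_generic.ko.xz dependency chain).
    if modprobe --show-depends uio_pci_generic &> /dev/null; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found' >&2
    return 1
}

driver=$(pick_driver) && echo "Looking for driver=$driver"

In this run no IOMMU groups exist and noiommu mode is not enabled, so the fallback path is taken and the rest of the suite binds devices with uio_pci_generic.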
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:14:28.857 21:17:18 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:14:28.857 21:17:18 -- setup/driver.sh@65 -- # setup reset 00:14:28.857 21:17:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:28.857 21:17:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:29.792 00:14:29.792 real 0m1.772s 00:14:29.792 user 0m0.635s 00:14:29.792 sys 0m1.216s 00:14:29.792 21:17:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:29.792 21:17:18 -- common/autotest_common.sh@10 -- # set +x 00:14:29.792 ************************************ 00:14:29.792 END TEST guess_driver 00:14:29.792 ************************************ 00:14:29.792 00:14:29.792 real 0m2.821s 00:14:29.792 user 0m0.979s 00:14:29.792 sys 0m2.004s 00:14:29.792 21:17:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:29.792 21:17:18 -- common/autotest_common.sh@10 -- # set +x 00:14:29.792 ************************************ 00:14:29.792 END TEST driver 00:14:29.792 ************************************ 00:14:29.792 21:17:18 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:14:29.792 21:17:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:29.792 21:17:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.792 21:17:18 -- common/autotest_common.sh@10 -- # set +x 00:14:29.792 ************************************ 00:14:29.792 START TEST devices 00:14:29.792 ************************************ 00:14:29.792 21:17:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:14:30.051 * Looking for test storage... 00:14:30.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:30.051 21:17:19 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:14:30.051 21:17:19 -- setup/devices.sh@192 -- # setup reset 00:14:30.051 21:17:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:30.051 21:17:19 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:30.990 21:17:20 -- setup/devices.sh@194 -- # get_zoned_devs 00:14:30.990 21:17:20 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:30.990 21:17:20 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:30.990 21:17:20 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:30.990 21:17:20 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:30.990 21:17:20 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:30.990 21:17:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:30.990 21:17:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:30.990 21:17:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:30.990 21:17:20 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:30.990 21:17:20 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:14:30.990 21:17:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:14:30.990 21:17:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:14:30.990 21:17:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:30.990 21:17:20 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:30.990 21:17:20 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:14:30.990 21:17:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:14:30.990 21:17:20 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:14:30.990 21:17:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:30.990 21:17:20 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:30.990 21:17:20 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:14:30.990 21:17:20 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:30.990 21:17:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:30.990 21:17:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:30.990 21:17:20 -- setup/devices.sh@196 -- # blocks=() 00:14:30.990 21:17:20 -- setup/devices.sh@196 -- # declare -a blocks 00:14:30.990 21:17:20 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:14:30.990 21:17:20 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:14:30.990 21:17:20 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:14:30.990 21:17:20 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:30.990 21:17:20 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:14:30.990 21:17:20 -- setup/devices.sh@201 -- # ctrl=nvme0 00:14:30.990 21:17:20 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:14:30.990 21:17:20 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:30.990 21:17:20 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:14:30.990 21:17:20 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:14:30.990 21:17:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:14:30.990 No valid GPT data, bailing 00:14:30.990 21:17:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:30.990 21:17:20 -- scripts/common.sh@391 -- # pt= 00:14:30.990 21:17:20 -- scripts/common.sh@392 -- # return 1 00:14:30.990 21:17:20 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:14:30.990 21:17:20 -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:30.990 21:17:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:30.990 21:17:20 -- setup/common.sh@80 -- # echo 4294967296 00:14:30.990 21:17:20 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:14:30.990 21:17:20 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:30.990 21:17:20 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:14:30.990 21:17:20 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:30.990 21:17:20 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:14:30.990 21:17:20 -- setup/devices.sh@201 -- # ctrl=nvme0 00:14:30.990 21:17:20 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:14:30.990 21:17:20 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:30.990 21:17:20 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:14:30.990 21:17:20 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:14:30.990 21:17:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:14:30.990 No valid GPT data, bailing 00:14:30.990 21:17:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:14:30.990 21:17:20 -- scripts/common.sh@391 -- # pt= 00:14:30.990 21:17:20 -- scripts/common.sh@392 -- # return 1 00:14:30.990 21:17:20 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:14:30.990 21:17:20 -- setup/common.sh@76 -- # local dev=nvme0n2 00:14:30.990 21:17:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:14:30.990 21:17:20 -- setup/common.sh@80 -- # echo 4294967296 00:14:30.990 21:17:20 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:14:30.990 21:17:20 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:30.990 21:17:20 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:14:30.990 21:17:20 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:30.990 21:17:20 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:14:30.990 21:17:20 -- setup/devices.sh@201 -- # ctrl=nvme0 00:14:30.990 21:17:20 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:14:30.990 21:17:20 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:30.990 21:17:20 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:14:30.990 21:17:20 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:14:30.990 21:17:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:14:31.248 No valid GPT data, bailing 00:14:31.248 21:17:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:14:31.248 21:17:20 -- scripts/common.sh@391 -- # pt= 00:14:31.248 21:17:20 -- scripts/common.sh@392 -- # return 1 00:14:31.248 21:17:20 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:14:31.248 21:17:20 -- setup/common.sh@76 -- # local dev=nvme0n3 00:14:31.248 21:17:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:14:31.248 21:17:20 -- setup/common.sh@80 -- # echo 4294967296 00:14:31.248 21:17:20 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:14:31.248 21:17:20 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:31.248 21:17:20 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:14:31.248 21:17:20 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:14:31.248 21:17:20 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:14:31.248 21:17:20 -- setup/devices.sh@201 -- # ctrl=nvme1 00:14:31.248 21:17:20 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:14:31.248 21:17:20 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:14:31.248 21:17:20 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:14:31.248 21:17:20 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:14:31.248 21:17:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:14:31.248 No valid GPT data, bailing 00:14:31.248 21:17:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:14:31.248 21:17:20 -- scripts/common.sh@391 -- # pt= 00:14:31.248 21:17:20 -- scripts/common.sh@392 -- # return 1 00:14:31.249 21:17:20 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:14:31.249 21:17:20 -- setup/common.sh@76 -- # local dev=nvme1n1 00:14:31.249 21:17:20 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:14:31.249 21:17:20 -- setup/common.sh@80 -- # echo 5368709120 00:14:31.249 21:17:20 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:14:31.249 21:17:20 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:14:31.249 21:17:20 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:14:31.249 21:17:20 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:14:31.249 21:17:20 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:14:31.249 21:17:20 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:14:31.249 21:17:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.249 21:17:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.249 21:17:20 -- common/autotest_common.sh@10 -- # set +x 00:14:31.249 
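Before the mount tests start, the devices suite above enumerates /sys/block/nvme* (skipping controller character nodes), uses the repo's spdk-gpt.py plus blkid to reject namespaces that already carry a partition table ("No valid GPT data, bailing" means the disk is free to use), and keeps only disks at or above the 3 GiB minimum, recording which PCI device each one belongs to. The sketch below condenses that filter; the blkid-only check and the sysfs hop used to derive the PCI address are assumptions standing in for the real setup/devices.sh.

#!/usr/bin/env bash
# Illustrative disk filter; the real test calls scripts/spdk-gpt.py before blkid,
# and the exact sysfs path to the PCI address may differ.
shopt -s extglob nullglob

min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3 GiB floor, as in the traced run
declare -a blocks
declare -A blocks_to_pci

for block in /sys/block/nvme!(*c*); do
    dev=${block##*/}
    # Skip namespaces that already carry a partition table.
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2> /dev/null) || pt=
    [[ -n $pt ]] && continue
    # /sys/block/<dev>/size is reported in 512-byte sectors.
    size=$(( $(< "$block/size") * 512 ))
    (( size >= min_disk_size )) || continue
    blocks+=("$dev")
    blocks_to_pci[$dev]=$(basename "$(readlink -f "$block/device/device")")
done

(( ${#blocks[@]} > 0 )) && echo "test disk: ${blocks[0]} (pci ${blocks_to_pci[${blocks[0]}]})"

In this run four namespaces pass the filter (three behind 0000:00:11.0 and one behind 0000:00:10.0), and nvme0n1 is declared the test disk.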
************************************ 00:14:31.249 START TEST nvme_mount 00:14:31.249 ************************************ 00:14:31.249 21:17:20 -- common/autotest_common.sh@1111 -- # nvme_mount 00:14:31.249 21:17:20 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:14:31.249 21:17:20 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:14:31.249 21:17:20 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:31.249 21:17:20 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:31.249 21:17:20 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:14:31.249 21:17:20 -- setup/common.sh@39 -- # local disk=nvme0n1 00:14:31.249 21:17:20 -- setup/common.sh@40 -- # local part_no=1 00:14:31.249 21:17:20 -- setup/common.sh@41 -- # local size=1073741824 00:14:31.249 21:17:20 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:14:31.249 21:17:20 -- setup/common.sh@44 -- # parts=() 00:14:31.249 21:17:20 -- setup/common.sh@44 -- # local parts 00:14:31.249 21:17:20 -- setup/common.sh@46 -- # (( part = 1 )) 00:14:31.249 21:17:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:31.249 21:17:20 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:14:31.249 21:17:20 -- setup/common.sh@46 -- # (( part++ )) 00:14:31.249 21:17:20 -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:31.249 21:17:20 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:14:31.249 21:17:20 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:14:31.249 21:17:20 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:14:32.625 Creating new GPT entries in memory. 00:14:32.625 GPT data structures destroyed! You may now partition the disk using fdisk or 00:14:32.625 other utilities. 00:14:32.625 21:17:21 -- setup/common.sh@57 -- # (( part = 1 )) 00:14:32.625 21:17:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:32.625 21:17:21 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:14:32.625 21:17:21 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:32.625 21:17:21 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:14:33.563 Creating new GPT entries in memory. 00:14:33.563 The operation has completed successfully. 
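The nvme_mount test traced here and below wipes the test disk, creates one small partition, waits for udev to publish it, then formats and mounts it and drops a dummy file for the verify step. A compressed sketch of that flow, assuming a udevadm settle in place of the repo's scripts/sync_dev_uevents.sh and the partition geometry shown in the trace:

#!/usr/bin/env bash
# Illustrative condensation of the partition/format/mount steps traced below
# (the real test synchronizes on udev "add" events via sync_dev_uevents.sh).
set -euo pipefail

disk=nvme0n1
nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
part_sectors=$(( 1073741824 / 4096 ))        # 262144, matching --new=1:2048:264191 above

sgdisk "/dev/$disk" --zap-all                          # drop any existing GPT/MBR
sgdisk "/dev/$disk" --new=1:2048:$(( 2048 + part_sectors - 1 ))
udevadm settle                                         # wait until the partition node appears
[[ -b /dev/${disk}p1 ]]

mkdir -p "$nvme_mount"
mkfs.ext4 -qF "/dev/${disk}p1"
mount "/dev/${disk}p1" "$nvme_mount"
touch "$nvme_mount/test_nvme"                          # dummy file the verify step checks for
mountpoint -q "$nvme_mount"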
00:14:33.563 21:17:22 -- setup/common.sh@57 -- # (( part++ )) 00:14:33.563 21:17:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:33.563 21:17:22 -- setup/common.sh@62 -- # wait 71356 00:14:33.563 21:17:22 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:33.563 21:17:22 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:14:33.563 21:17:22 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:33.563 21:17:22 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:14:33.563 21:17:22 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:14:33.563 21:17:22 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:33.563 21:17:22 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:33.563 21:17:22 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:33.563 21:17:22 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:14:33.563 21:17:22 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:33.563 21:17:22 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:33.563 21:17:22 -- setup/devices.sh@53 -- # local found=0 00:14:33.563 21:17:22 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:33.563 21:17:22 -- setup/devices.sh@56 -- # : 00:14:33.563 21:17:22 -- setup/devices.sh@59 -- # local pci status 00:14:33.563 21:17:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:33.563 21:17:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:33.563 21:17:22 -- setup/devices.sh@47 -- # setup output config 00:14:33.563 21:17:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:33.563 21:17:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:33.822 21:17:22 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:33.822 21:17:22 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:14:33.822 21:17:22 -- setup/devices.sh@63 -- # found=1 00:14:33.822 21:17:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:33.822 21:17:22 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:33.822 21:17:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:34.079 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:34.079 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:34.079 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:34.079 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:34.079 21:17:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:34.079 21:17:23 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:14:34.079 21:17:23 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:34.079 21:17:23 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:34.079 21:17:23 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:34.079 21:17:23 -- setup/devices.sh@110 -- # cleanup_nvme 00:14:34.079 21:17:23 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:34.079 21:17:23 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:34.336 21:17:23 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:34.336 21:17:23 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:14:34.336 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:14:34.336 21:17:23 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:34.336 21:17:23 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:34.596 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:34.596 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:34.596 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:34.596 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:34.596 21:17:23 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:14:34.596 21:17:23 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:14:34.596 21:17:23 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:34.596 21:17:23 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:14:34.596 21:17:23 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:14:34.596 21:17:23 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:34.596 21:17:23 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:34.596 21:17:23 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:34.596 21:17:23 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:14:34.596 21:17:23 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:34.596 21:17:23 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:34.596 21:17:23 -- setup/devices.sh@53 -- # local found=0 00:14:34.596 21:17:23 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:34.596 21:17:23 -- setup/devices.sh@56 -- # : 00:14:34.596 21:17:23 -- setup/devices.sh@59 -- # local pci status 00:14:34.596 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:34.596 21:17:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:34.596 21:17:23 -- setup/devices.sh@47 -- # setup output config 00:14:34.596 21:17:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:34.596 21:17:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:34.866 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:34.866 21:17:23 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:14:34.866 21:17:23 -- setup/devices.sh@63 -- # found=1 00:14:34.866 21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:34.866 21:17:23 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:34.866 
21:17:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:35.128 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:35.128 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:35.128 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:35.128 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:35.128 21:17:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:35.128 21:17:24 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:14:35.128 21:17:24 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:35.128 21:17:24 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:14:35.128 21:17:24 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:14:35.129 21:17:24 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:35.129 21:17:24 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:14:35.129 21:17:24 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:35.129 21:17:24 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:14:35.129 21:17:24 -- setup/devices.sh@50 -- # local mount_point= 00:14:35.129 21:17:24 -- setup/devices.sh@51 -- # local test_file= 00:14:35.129 21:17:24 -- setup/devices.sh@53 -- # local found=0 00:14:35.129 21:17:24 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:14:35.129 21:17:24 -- setup/devices.sh@59 -- # local pci status 00:14:35.129 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:35.129 21:17:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:35.129 21:17:24 -- setup/devices.sh@47 -- # setup output config 00:14:35.129 21:17:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:35.129 21:17:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:35.709 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:35.709 21:17:24 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:14:35.709 21:17:24 -- setup/devices.sh@63 -- # found=1 00:14:35.709 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:35.709 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:35.709 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:35.709 21:17:24 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:35.709 21:17:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:35.998 21:17:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:35.998 21:17:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:35.998 21:17:25 -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:35.998 21:17:25 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:14:35.998 21:17:25 -- setup/devices.sh@68 -- # return 0 00:14:35.998 21:17:25 -- setup/devices.sh@128 -- # cleanup_nvme 00:14:35.998 21:17:25 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:35.998 21:17:25 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:35.998 21:17:25 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:35.998 21:17:25 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:35.998 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:14:35.998 00:14:35.998 real 0m4.629s 00:14:35.998 user 0m0.834s 00:14:35.998 sys 0m1.548s 00:14:35.998 21:17:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:35.998 21:17:25 -- common/autotest_common.sh@10 -- # set +x 00:14:35.998 ************************************ 00:14:35.998 END TEST nvme_mount 00:14:35.998 ************************************ 00:14:35.998 21:17:25 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:14:35.998 21:17:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:35.998 21:17:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:35.998 21:17:25 -- common/autotest_common.sh@10 -- # set +x 00:14:36.258 ************************************ 00:14:36.258 START TEST dm_mount 00:14:36.258 ************************************ 00:14:36.258 21:17:25 -- common/autotest_common.sh@1111 -- # dm_mount 00:14:36.258 21:17:25 -- setup/devices.sh@144 -- # pv=nvme0n1 00:14:36.258 21:17:25 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:14:36.258 21:17:25 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:14:36.258 21:17:25 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:14:36.258 21:17:25 -- setup/common.sh@39 -- # local disk=nvme0n1 00:14:36.258 21:17:25 -- setup/common.sh@40 -- # local part_no=2 00:14:36.258 21:17:25 -- setup/common.sh@41 -- # local size=1073741824 00:14:36.258 21:17:25 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:14:36.258 21:17:25 -- setup/common.sh@44 -- # parts=() 00:14:36.258 21:17:25 -- setup/common.sh@44 -- # local parts 00:14:36.258 21:17:25 -- setup/common.sh@46 -- # (( part = 1 )) 00:14:36.258 21:17:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:36.258 21:17:25 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:14:36.258 21:17:25 -- setup/common.sh@46 -- # (( part++ )) 00:14:36.258 21:17:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:36.258 21:17:25 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:14:36.258 21:17:25 -- setup/common.sh@46 -- # (( part++ )) 00:14:36.258 21:17:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:14:36.258 21:17:25 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:14:36.258 21:17:25 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:14:36.258 21:17:25 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:14:37.192 Creating new GPT entries in memory. 00:14:37.192 GPT data structures destroyed! You may now partition the disk using fdisk or 00:14:37.192 other utilities. 00:14:37.192 21:17:26 -- setup/common.sh@57 -- # (( part = 1 )) 00:14:37.192 21:17:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:37.192 21:17:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:14:37.192 21:17:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:37.192 21:17:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:14:38.128 Creating new GPT entries in memory. 00:14:38.128 The operation has completed successfully. 00:14:38.128 21:17:27 -- setup/common.sh@57 -- # (( part++ )) 00:14:38.128 21:17:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:38.128 21:17:27 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:14:38.128 21:17:27 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:14:38.128 21:17:27 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:14:39.504 The operation has completed successfully. 00:14:39.504 21:17:28 -- setup/common.sh@57 -- # (( part++ )) 00:14:39.504 21:17:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:14:39.504 21:17:28 -- setup/common.sh@62 -- # wait 71794 00:14:39.504 21:17:28 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:14:39.504 21:17:28 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:39.504 21:17:28 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:39.504 21:17:28 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:14:39.504 21:17:28 -- setup/devices.sh@160 -- # for t in {1..5} 00:14:39.504 21:17:28 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:39.504 21:17:28 -- setup/devices.sh@161 -- # break 00:14:39.504 21:17:28 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:39.504 21:17:28 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:14:39.504 21:17:28 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:14:39.504 21:17:28 -- setup/devices.sh@166 -- # dm=dm-0 00:14:39.504 21:17:28 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:14:39.504 21:17:28 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:14:39.504 21:17:28 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:39.504 21:17:28 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:14:39.504 21:17:28 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:39.504 21:17:28 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:14:39.504 21:17:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:14:39.504 21:17:28 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:39.504 21:17:28 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:39.504 21:17:28 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:39.504 21:17:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:14:39.504 21:17:28 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:39.504 21:17:28 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:39.504 21:17:28 -- setup/devices.sh@53 -- # local found=0 00:14:39.504 21:17:28 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:14:39.504 21:17:28 -- setup/devices.sh@56 -- # : 00:14:39.504 21:17:28 -- setup/devices.sh@59 -- # local pci status 00:14:39.504 21:17:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:39.504 21:17:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:39.504 21:17:28 -- setup/devices.sh@47 -- # setup output config 00:14:39.504 21:17:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:39.504 21:17:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:39.762 21:17:28 -- 
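The dm_mount test above repeats the partitioning with two partitions, builds a device-mapper target named nvme_dm_test across them, resolves /dev/mapper/nvme_dm_test to its dm-N node, confirms both partitions list that node as a holder, and then formats and mounts it like a plain disk. The log does not show the table handed to dmsetup, so the linear concatenation below is an assumption about how the two partitions are joined:

#!/usr/bin/env bash
# Illustrative device-mapper setup mirroring the traced dm_mount flow
# (the exact dmsetup table is not visible in the log and is assumed here).
set -euo pipefail
p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")     # sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")

# Concatenate the two partitions into one linear dm device.
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF

# Resolve the friendly name to its dm-N node, as the test does with readlink.
dm=$(readlink -f /dev/mapper/nvme_dm_test)   # e.g. /dev/dm-0
dm=${dm##*/}
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]   # both partitions now hold dm-0

mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount
mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount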
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:39.762 21:17:28 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:14:39.762 21:17:28 -- setup/devices.sh@63 -- # found=1 00:14:39.762 21:17:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:39.762 21:17:28 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:39.762 21:17:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:39.762 21:17:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:39.762 21:17:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:40.020 21:17:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:40.020 21:17:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:40.020 21:17:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:40.020 21:17:29 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:14:40.020 21:17:29 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:40.020 21:17:29 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:14:40.020 21:17:29 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:14:40.020 21:17:29 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:40.020 21:17:29 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:14:40.021 21:17:29 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:14:40.021 21:17:29 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:14:40.021 21:17:29 -- setup/devices.sh@50 -- # local mount_point= 00:14:40.021 21:17:29 -- setup/devices.sh@51 -- # local test_file= 00:14:40.021 21:17:29 -- setup/devices.sh@53 -- # local found=0 00:14:40.021 21:17:29 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:14:40.021 21:17:29 -- setup/devices.sh@59 -- # local pci status 00:14:40.021 21:17:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:40.021 21:17:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:14:40.021 21:17:29 -- setup/devices.sh@47 -- # setup output config 00:14:40.021 21:17:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:40.021 21:17:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:40.278 21:17:29 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:40.278 21:17:29 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:14:40.279 21:17:29 -- setup/devices.sh@63 -- # found=1 00:14:40.279 21:17:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:40.279 21:17:29 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:40.279 21:17:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:40.538 21:17:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:40.538 21:17:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:40.538 21:17:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:40.538 21:17:29 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:14:40.807 21:17:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:14:40.807 21:17:29 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:14:40.807 21:17:29 -- setup/devices.sh@68 -- # return 0 00:14:40.807 21:17:29 -- setup/devices.sh@187 -- # cleanup_dm 00:14:40.807 21:17:29 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:40.807 21:17:29 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:14:40.808 21:17:29 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:14:40.808 21:17:29 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:40.808 21:17:29 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:14:40.808 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:14:40.808 21:17:29 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:14:40.808 21:17:29 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:14:40.808 00:14:40.808 real 0m4.581s 00:14:40.808 user 0m0.579s 00:14:40.808 sys 0m0.970s 00:14:40.808 21:17:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:40.808 21:17:29 -- common/autotest_common.sh@10 -- # set +x 00:14:40.808 ************************************ 00:14:40.808 END TEST dm_mount 00:14:40.808 ************************************ 00:14:40.808 21:17:29 -- setup/devices.sh@1 -- # cleanup 00:14:40.808 21:17:29 -- setup/devices.sh@11 -- # cleanup_nvme 00:14:40.808 21:17:29 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:14:40.808 21:17:29 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:40.808 21:17:29 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:14:40.808 21:17:29 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:14:40.808 21:17:29 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:14:41.066 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:14:41.066 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:14:41.066 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:14:41.066 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:14:41.066 21:17:30 -- setup/devices.sh@12 -- # cleanup_dm 00:14:41.066 21:17:30 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:14:41.066 21:17:30 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:14:41.066 21:17:30 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:14:41.066 21:17:30 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:14:41.066 21:17:30 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:14:41.066 21:17:30 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:14:41.066 ************************************ 00:14:41.066 END TEST devices 00:14:41.066 ************************************ 00:14:41.066 00:14:41.066 real 0m11.194s 00:14:41.066 user 0m2.165s 00:14:41.066 sys 0m3.445s 00:14:41.066 21:17:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:41.066 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:14:41.066 ************************************ 00:14:41.066 END TEST setup.sh 00:14:41.066 ************************************ 00:14:41.066 00:14:41.066 real 0m26.800s 00:14:41.066 user 0m8.338s 00:14:41.066 sys 0m13.062s 00:14:41.066 21:17:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:41.066 21:17:30 -- common/autotest_common.sh@10 -- # set +x 00:14:41.066 21:17:30 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:42.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:42.003 Hugepages 00:14:42.003 node hugesize free / total 00:14:42.003 node0 1048576kB 0 / 0 00:14:42.003 node0 2048kB 2048 / 2048 00:14:42.003 00:14:42.003 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:42.003 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:14:42.262 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:14:42.262 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:14:42.262 21:17:31 -- spdk/autotest.sh@130 -- # uname -s 00:14:42.262 21:17:31 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:14:42.262 21:17:31 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:14:42.262 21:17:31 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:43.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:43.200 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:43.200 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:43.200 21:17:32 -- common/autotest_common.sh@1518 -- # sleep 1 00:14:44.140 21:17:33 -- common/autotest_common.sh@1519 -- # bdfs=() 00:14:44.140 21:17:33 -- common/autotest_common.sh@1519 -- # local bdfs 00:14:44.140 21:17:33 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:14:44.399 21:17:33 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:14:44.399 21:17:33 -- common/autotest_common.sh@1499 -- # bdfs=() 00:14:44.399 21:17:33 -- common/autotest_common.sh@1499 -- # local bdfs 00:14:44.399 21:17:33 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:44.399 21:17:33 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:44.399 21:17:33 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:14:44.399 21:17:33 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:14:44.399 21:17:33 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:14:44.399 21:17:33 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:44.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:44.967 Waiting for block devices as requested 00:14:44.967 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:44.967 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:44.967 21:17:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:14:44.967 21:17:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:14:44.967 21:17:34 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:14:44.967 21:17:34 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:14:44.967 21:17:34 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:44.967 21:17:34 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:14:44.967 21:17:34 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:44.967 21:17:34 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:14:44.967 21:17:34 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:14:44.967 21:17:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:14:44.967 21:17:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:14:44.967 21:17:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:14:44.967 21:17:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:14:44.967 21:17:34 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:14:44.967 21:17:34 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:14:44.967 21:17:34 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:14:44.967 21:17:34 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:14:44.967 21:17:34 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:14:44.967 21:17:34 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:14:45.292 21:17:34 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:14:45.292 21:17:34 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:14:45.292 21:17:34 -- common/autotest_common.sh@1543 -- # continue 00:14:45.292 21:17:34 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:14:45.292 21:17:34 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:14:45.292 21:17:34 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:14:45.292 21:17:34 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:14:45.292 21:17:34 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:45.292 21:17:34 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:14:45.292 21:17:34 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:45.292 21:17:34 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:14:45.292 21:17:34 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:14:45.292 21:17:34 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:14:45.292 21:17:34 -- common/autotest_common.sh@1531 -- # grep oacs 00:14:45.292 21:17:34 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:14:45.292 21:17:34 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:14:45.292 21:17:34 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:14:45.292 21:17:34 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:14:45.292 21:17:34 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:14:45.292 21:17:34 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:14:45.292 21:17:34 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:14:45.292 21:17:34 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:14:45.292 21:17:34 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:14:45.292 21:17:34 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:14:45.292 21:17:34 -- common/autotest_common.sh@1543 -- # continue 00:14:45.292 21:17:34 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:14:45.292 21:17:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:45.292 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:14:45.292 21:17:34 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:14:45.292 21:17:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:45.292 21:17:34 -- common/autotest_common.sh@10 -- # set +x 00:14:45.292 21:17:34 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:45.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:14:46.118 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:46.118 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:46.118 21:17:35 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:14:46.118 21:17:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:46.118 21:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:46.377 21:17:35 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:14:46.377 21:17:35 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:14:46.377 21:17:35 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:14:46.377 21:17:35 -- common/autotest_common.sh@1563 -- # bdfs=() 00:14:46.377 21:17:35 -- common/autotest_common.sh@1563 -- # local bdfs 00:14:46.377 21:17:35 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:14:46.377 21:17:35 -- common/autotest_common.sh@1499 -- # bdfs=() 00:14:46.377 21:17:35 -- common/autotest_common.sh@1499 -- # local bdfs 00:14:46.377 21:17:35 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:46.377 21:17:35 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:46.377 21:17:35 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:14:46.377 21:17:35 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:14:46.377 21:17:35 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:14:46.377 21:17:35 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:14:46.377 21:17:35 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:14:46.377 21:17:35 -- common/autotest_common.sh@1566 -- # device=0x0010 00:14:46.377 21:17:35 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:46.377 21:17:35 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:14:46.377 21:17:35 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:14:46.377 21:17:35 -- common/autotest_common.sh@1566 -- # device=0x0010 00:14:46.377 21:17:35 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:46.377 21:17:35 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:14:46.377 21:17:35 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:14:46.377 21:17:35 -- common/autotest_common.sh@1579 -- # return 0 00:14:46.377 21:17:35 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:14:46.377 21:17:35 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:14:46.377 21:17:35 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:14:46.377 21:17:35 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:14:46.377 21:17:35 -- spdk/autotest.sh@162 -- # timing_enter lib 00:14:46.377 21:17:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:46.377 21:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:46.377 21:17:35 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:46.377 21:17:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:46.377 21:17:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.377 21:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:46.377 ************************************ 00:14:46.377 START TEST env 00:14:46.377 ************************************ 00:14:46.377 21:17:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:46.638 * Looking for test storage... 
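The opal_revert_cleanup step above enumerates every NVMe controller that gen_nvme.sh reports and compares its PCI device ID against 0x0a54 before deciding whether an Opal revert is needed; on this QEMU rig every controller reports 0x0010, so nothing matches. A minimal bash sketch of that enumeration pattern, assuming the same repo layout under /home/vagrant/spdk_repo/spdk and treating 0x0a54 purely as the target value seen in the log:

```bash
#!/usr/bin/env bash
# Sketch: list NVMe controllers the way the autotest does, then keep only the
# ones whose PCI device ID matches a target value (0x0a54 in the log above).
# Assumes an SPDK checkout at $rootdir providing gen_nvme.sh, and jq installed.
rootdir=/home/vagrant/spdk_repo/spdk
target_id=0x0a54

mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

matches=()
for bdf in "${bdfs[@]}"; do
    # Each PCI function exposes its 16-bit device ID in sysfs, e.g. "0x0010".
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ "$device" == "$target_id" ]] && matches+=("$bdf")
done

# Empty on this rig, where both controllers report 0x0010 instead of 0x0a54.
printf '%s\n' "${matches[@]}"
```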
00:14:46.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:14:46.638 21:17:35 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:46.638 21:17:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:46.638 21:17:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.638 21:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:46.638 ************************************ 00:14:46.638 START TEST env_memory 00:14:46.638 ************************************ 00:14:46.638 21:17:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:46.638 00:14:46.638 00:14:46.638 CUnit - A unit testing framework for C - Version 2.1-3 00:14:46.638 http://cunit.sourceforge.net/ 00:14:46.638 00:14:46.638 00:14:46.638 Suite: memory 00:14:46.638 Test: alloc and free memory map ...[2024-04-26 21:17:35.851099] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:14:46.638 passed 00:14:46.638 Test: mem map translation ...[2024-04-26 21:17:35.875424] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:14:46.638 [2024-04-26 21:17:35.875528] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:14:46.638 [2024-04-26 21:17:35.875623] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:14:46.638 [2024-04-26 21:17:35.875680] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:14:46.901 passed 00:14:46.901 Test: mem map registration ...[2024-04-26 21:17:35.921943] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:14:46.901 [2024-04-26 21:17:35.922065] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:14:46.901 passed 00:14:46.901 Test: mem map adjacent registrations ...passed 00:14:46.901 00:14:46.901 Run Summary: Type Total Ran Passed Failed Inactive 00:14:46.901 suites 1 1 n/a 0 0 00:14:46.901 tests 4 4 4 0 0 00:14:46.901 asserts 152 152 152 0 n/a 00:14:46.901 00:14:46.901 Elapsed time = 0.162 seconds 00:14:46.901 00:14:46.901 real 0m0.180s 00:14:46.901 user 0m0.167s 00:14:46.901 sys 0m0.012s 00:14:46.901 21:17:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:46.901 21:17:35 -- common/autotest_common.sh@10 -- # set +x 00:14:46.901 ************************************ 00:14:46.901 END TEST env_memory 00:14:46.901 ************************************ 00:14:46.901 21:17:36 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:46.901 21:17:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:46.901 21:17:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.901 21:17:36 -- common/autotest_common.sh@10 -- # set +x 00:14:46.901 ************************************ 00:14:46.901 START TEST env_vtophys 00:14:46.901 ************************************ 00:14:46.901 21:17:36 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:47.167 EAL: lib.eal log level changed from notice to debug 00:14:47.167 EAL: Detected lcore 0 as core 0 on socket 0 00:14:47.167 EAL: Detected lcore 1 as core 0 on socket 0 00:14:47.167 EAL: Detected lcore 2 as core 0 on socket 0 00:14:47.167 EAL: Detected lcore 3 as core 0 on socket 0 00:14:47.167 EAL: Detected lcore 4 as core 0 on socket 0 00:14:47.167 EAL: Detected lcore 5 as core 0 on socket 0 00:14:47.167 EAL: Detected lcore 6 as core 0 on socket 0 00:14:47.167 EAL: Detected lcore 7 as core 0 on socket 0 00:14:47.167 EAL: Detected lcore 8 as core 0 on socket 0 00:14:47.167 EAL: Detected lcore 9 as core 0 on socket 0 00:14:47.167 EAL: Maximum logical cores by configuration: 128 00:14:47.167 EAL: Detected CPU lcores: 10 00:14:47.167 EAL: Detected NUMA nodes: 1 00:14:47.167 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:14:47.167 EAL: Detected shared linkage of DPDK 00:14:47.167 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:14:47.167 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:14:47.167 EAL: Registered [vdev] bus. 00:14:47.167 EAL: bus.vdev log level changed from disabled to notice 00:14:47.167 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:14:47.167 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:14:47.167 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:14:47.167 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:14:47.167 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:14:47.167 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:14:47.167 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:14:47.167 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:14:47.167 EAL: No shared files mode enabled, IPC will be disabled 00:14:47.167 EAL: No shared files mode enabled, IPC is disabled 00:14:47.167 EAL: Selected IOVA mode 'PA' 00:14:47.167 EAL: Probing VFIO support... 00:14:47.167 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:47.167 EAL: VFIO modules not loaded, skipping VFIO support... 00:14:47.167 EAL: Ask a virtual area of 0x2e000 bytes 00:14:47.167 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:14:47.167 EAL: Setting up physically contiguous memory... 
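EAL selects IOVA mode 'PA' here because no vfio module is present (/sys/module/vfio is missing), so the NVMe devices stay bound to uio_pci_generic as shown earlier in the setup.sh output. A rough way to check up front whether VFIO is likely to be usable on a host might look like the following sketch; the module names and the optional modprobe are assumptions about a generic Linux host, not part of the test itself:

```bash
#!/usr/bin/env bash
# Sketch: decide whether VFIO (and therefore IOVA mode VA) is likely to work.
# Nothing here is SPDK-specific; it only inspects standard sysfs paths.
if [[ -d /sys/module/vfio_pci ]] || modprobe vfio-pci 2>/dev/null; then
    # An active IOMMU is also required, otherwise vfio-pci needs noiommu mode.
    groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l)
    if (( groups > 0 )); then
        echo "vfio-pci and IOMMU groups present: EAL can use IOVA mode VA"
    else
        echo "vfio-pci loaded but no IOMMU groups: expect noiommu or a PA fallback"
    fi
else
    echo "no VFIO support: expect uio_pci_generic and IOVA mode PA, as in this run"
fi
```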
00:14:47.167 EAL: Setting maximum number of open files to 524288 00:14:47.167 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:14:47.167 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:14:47.167 EAL: Ask a virtual area of 0x61000 bytes 00:14:47.167 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:14:47.167 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:47.167 EAL: Ask a virtual area of 0x400000000 bytes 00:14:47.167 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:14:47.167 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:14:47.167 EAL: Ask a virtual area of 0x61000 bytes 00:14:47.167 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:14:47.167 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:47.167 EAL: Ask a virtual area of 0x400000000 bytes 00:14:47.167 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:14:47.167 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:14:47.167 EAL: Ask a virtual area of 0x61000 bytes 00:14:47.167 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:14:47.167 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:47.167 EAL: Ask a virtual area of 0x400000000 bytes 00:14:47.167 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:14:47.167 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:14:47.167 EAL: Ask a virtual area of 0x61000 bytes 00:14:47.167 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:14:47.167 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:47.167 EAL: Ask a virtual area of 0x400000000 bytes 00:14:47.167 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:14:47.167 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:14:47.167 EAL: Hugepages will be freed exactly as allocated. 00:14:47.167 EAL: No shared files mode enabled, IPC is disabled 00:14:47.167 EAL: No shared files mode enabled, IPC is disabled 00:14:47.167 EAL: TSC frequency is ~2290000 KHz 00:14:47.167 EAL: Main lcore 0 is ready (tid=7f75b4089a00;cpuset=[0]) 00:14:47.167 EAL: Trying to obtain current memory policy. 00:14:47.167 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:47.167 EAL: Restoring previous memory policy: 0 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was expanded by 2MB 00:14:47.168 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: No PCI address specified using 'addr=' in: bus=pci 00:14:47.168 EAL: Mem event callback 'spdk:(nil)' registered 00:14:47.168 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:14:47.168 00:14:47.168 00:14:47.168 CUnit - A unit testing framework for C - Version 2.1-3 00:14:47.168 http://cunit.sourceforge.net/ 00:14:47.168 00:14:47.168 00:14:47.168 Suite: components_suite 00:14:47.168 Test: vtophys_malloc_test ...passed 00:14:47.168 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:14:47.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:47.168 EAL: Restoring previous memory policy: 4 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was expanded by 4MB 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was shrunk by 4MB 00:14:47.168 EAL: Trying to obtain current memory policy. 00:14:47.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:47.168 EAL: Restoring previous memory policy: 4 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was expanded by 6MB 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was shrunk by 6MB 00:14:47.168 EAL: Trying to obtain current memory policy. 00:14:47.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:47.168 EAL: Restoring previous memory policy: 4 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was expanded by 10MB 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was shrunk by 10MB 00:14:47.168 EAL: Trying to obtain current memory policy. 00:14:47.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:47.168 EAL: Restoring previous memory policy: 4 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was expanded by 18MB 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was shrunk by 18MB 00:14:47.168 EAL: Trying to obtain current memory policy. 00:14:47.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:47.168 EAL: Restoring previous memory policy: 4 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was expanded by 34MB 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was shrunk by 34MB 00:14:47.168 EAL: Trying to obtain current memory policy. 
00:14:47.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:47.168 EAL: Restoring previous memory policy: 4 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was expanded by 66MB 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was shrunk by 66MB 00:14:47.168 EAL: Trying to obtain current memory policy. 00:14:47.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:47.168 EAL: Restoring previous memory policy: 4 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.168 EAL: request: mp_malloc_sync 00:14:47.168 EAL: No shared files mode enabled, IPC is disabled 00:14:47.168 EAL: Heap on socket 0 was expanded by 130MB 00:14:47.168 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.435 EAL: request: mp_malloc_sync 00:14:47.435 EAL: No shared files mode enabled, IPC is disabled 00:14:47.435 EAL: Heap on socket 0 was shrunk by 130MB 00:14:47.435 EAL: Trying to obtain current memory policy. 00:14:47.435 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:47.435 EAL: Restoring previous memory policy: 4 00:14:47.435 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.435 EAL: request: mp_malloc_sync 00:14:47.435 EAL: No shared files mode enabled, IPC is disabled 00:14:47.435 EAL: Heap on socket 0 was expanded by 258MB 00:14:47.435 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.435 EAL: request: mp_malloc_sync 00:14:47.435 EAL: No shared files mode enabled, IPC is disabled 00:14:47.435 EAL: Heap on socket 0 was shrunk by 258MB 00:14:47.435 EAL: Trying to obtain current memory policy. 00:14:47.435 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:47.435 EAL: Restoring previous memory policy: 4 00:14:47.435 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.435 EAL: request: mp_malloc_sync 00:14:47.435 EAL: No shared files mode enabled, IPC is disabled 00:14:47.435 EAL: Heap on socket 0 was expanded by 514MB 00:14:47.704 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.704 EAL: request: mp_malloc_sync 00:14:47.704 EAL: No shared files mode enabled, IPC is disabled 00:14:47.704 EAL: Heap on socket 0 was shrunk by 514MB 00:14:47.704 EAL: Trying to obtain current memory policy. 
00:14:47.704 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:47.975 EAL: Restoring previous memory policy: 4 00:14:47.975 EAL: Calling mem event callback 'spdk:(nil)' 00:14:47.975 EAL: request: mp_malloc_sync 00:14:47.975 EAL: No shared files mode enabled, IPC is disabled 00:14:47.975 EAL: Heap on socket 0 was expanded by 1026MB 00:14:47.975 EAL: Calling mem event callback 'spdk:(nil)' 00:14:48.249 passed 00:14:48.249 00:14:48.249 Run Summary: Type Total Ran Passed Failed Inactive 00:14:48.249 suites 1 1 n/a 0 0 00:14:48.249 tests 2 2 2 0 0 00:14:48.249 asserts 5232 5232 5232 0 n/a 00:14:48.249 00:14:48.249 Elapsed time = 1.035 seconds 00:14:48.249 EAL: request: mp_malloc_sync 00:14:48.249 EAL: No shared files mode enabled, IPC is disabled 00:14:48.249 EAL: Heap on socket 0 was shrunk by 1026MB 00:14:48.249 EAL: Calling mem event callback 'spdk:(nil)' 00:14:48.249 EAL: request: mp_malloc_sync 00:14:48.249 EAL: No shared files mode enabled, IPC is disabled 00:14:48.249 EAL: Heap on socket 0 was shrunk by 2MB 00:14:48.249 EAL: No shared files mode enabled, IPC is disabled 00:14:48.249 EAL: No shared files mode enabled, IPC is disabled 00:14:48.249 EAL: No shared files mode enabled, IPC is disabled 00:14:48.249 00:14:48.249 real 0m1.240s 00:14:48.249 user 0m0.667s 00:14:48.249 sys 0m0.442s 00:14:48.249 21:17:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:48.249 ************************************ 00:14:48.249 END TEST env_vtophys 00:14:48.249 ************************************ 00:14:48.249 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:14:48.249 21:17:37 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:14:48.249 21:17:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:48.249 21:17:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:48.249 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:14:48.249 ************************************ 00:14:48.249 START TEST env_pci 00:14:48.249 ************************************ 00:14:48.249 21:17:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:14:48.523 00:14:48.523 00:14:48.523 CUnit - A unit testing framework for C - Version 2.1-3 00:14:48.523 http://cunit.sourceforge.net/ 00:14:48.523 00:14:48.523 00:14:48.523 Suite: pci 00:14:48.523 Test: pci_hook ...[2024-04-26 21:17:37.506322] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 73017 has claimed it 00:14:48.523 EAL: Cannot find device (10000:00:01.0) 00:14:48.523 passed 00:14:48.523 00:14:48.523 Run Summary: Type Total Ran Passed Failed Inactive 00:14:48.523 suites 1 1 n/a 0 0 00:14:48.523 tests 1 1 1 0 0 00:14:48.523 asserts 25 25 25 0 n/a 00:14:48.523 00:14:48.523 Elapsed time = 0.002 seconds 00:14:48.523 EAL: Failed to attach device on primary process 00:14:48.523 ************************************ 00:14:48.523 END TEST env_pci 00:14:48.523 ************************************ 00:14:48.523 00:14:48.523 real 0m0.016s 00:14:48.523 user 0m0.004s 00:14:48.523 sys 0m0.012s 00:14:48.523 21:17:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:48.523 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:14:48.523 21:17:37 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:14:48.523 21:17:37 -- env/env.sh@15 -- # uname 00:14:48.523 21:17:37 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:14:48.523 21:17:37 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:14:48.523 21:17:37 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:14:48.523 21:17:37 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:48.523 21:17:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:48.523 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:14:48.523 ************************************ 00:14:48.523 START TEST env_dpdk_post_init 00:14:48.523 ************************************ 00:14:48.523 21:17:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:14:48.523 EAL: Detected CPU lcores: 10 00:14:48.523 EAL: Detected NUMA nodes: 1 00:14:48.524 EAL: Detected shared linkage of DPDK 00:14:48.524 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:14:48.524 EAL: Selected IOVA mode 'PA' 00:14:48.524 TELEMETRY: No legacy callbacks, legacy socket not created 00:14:48.786 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:14:48.786 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:14:48.787 Starting DPDK initialization... 00:14:48.787 Starting SPDK post initialization... 00:14:48.787 SPDK NVMe probe 00:14:48.787 Attaching to 0000:00:10.0 00:14:48.787 Attaching to 0000:00:11.0 00:14:48.787 Attached to 0000:00:10.0 00:14:48.787 Attached to 0000:00:11.0 00:14:48.787 Cleaning up... 00:14:48.787 00:14:48.787 real 0m0.187s 00:14:48.787 user 0m0.050s 00:14:48.787 sys 0m0.038s 00:14:48.787 ************************************ 00:14:48.787 END TEST env_dpdk_post_init 00:14:48.787 ************************************ 00:14:48.787 21:17:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:48.787 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:14:48.787 21:17:37 -- env/env.sh@26 -- # uname 00:14:48.787 21:17:37 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:14:48.787 21:17:37 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:14:48.787 21:17:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:48.787 21:17:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:48.787 21:17:37 -- common/autotest_common.sh@10 -- # set +x 00:14:48.787 ************************************ 00:14:48.787 START TEST env_mem_callbacks 00:14:48.787 ************************************ 00:14:48.787 21:17:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:14:48.787 EAL: Detected CPU lcores: 10 00:14:48.787 EAL: Detected NUMA nodes: 1 00:14:48.787 EAL: Detected shared linkage of DPDK 00:14:48.787 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:14:48.787 EAL: Selected IOVA mode 'PA' 00:14:49.046 00:14:49.046 00:14:49.046 CUnit - A unit testing framework for C - Version 2.1-3 00:14:49.046 http://cunit.sourceforge.net/ 00:14:49.046 00:14:49.046 00:14:49.046 Suite: memory 00:14:49.046 Test: test ... 
00:14:49.046 register 0x200000200000 2097152 00:14:49.046 malloc 3145728 00:14:49.046 TELEMETRY: No legacy callbacks, legacy socket not created 00:14:49.046 register 0x200000400000 4194304 00:14:49.046 buf 0x200000500000 len 3145728 PASSED 00:14:49.046 malloc 64 00:14:49.046 buf 0x2000004fff40 len 64 PASSED 00:14:49.046 malloc 4194304 00:14:49.046 register 0x200000800000 6291456 00:14:49.046 buf 0x200000a00000 len 4194304 PASSED 00:14:49.046 free 0x200000500000 3145728 00:14:49.046 free 0x2000004fff40 64 00:14:49.046 unregister 0x200000400000 4194304 PASSED 00:14:49.046 free 0x200000a00000 4194304 00:14:49.046 unregister 0x200000800000 6291456 PASSED 00:14:49.046 malloc 8388608 00:14:49.046 register 0x200000400000 10485760 00:14:49.046 buf 0x200000600000 len 8388608 PASSED 00:14:49.046 free 0x200000600000 8388608 00:14:49.046 unregister 0x200000400000 10485760 PASSED 00:14:49.046 passed 00:14:49.046 00:14:49.046 Run Summary: Type Total Ran Passed Failed Inactive 00:14:49.046 suites 1 1 n/a 0 0 00:14:49.046 tests 1 1 1 0 0 00:14:49.046 asserts 15 15 15 0 n/a 00:14:49.046 00:14:49.046 Elapsed time = 0.009 seconds 00:14:49.046 ************************************ 00:14:49.046 END TEST env_mem_callbacks 00:14:49.046 ************************************ 00:14:49.046 00:14:49.046 real 0m0.148s 00:14:49.046 user 0m0.017s 00:14:49.046 sys 0m0.029s 00:14:49.046 21:17:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:49.046 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:14:49.046 ************************************ 00:14:49.046 END TEST env 00:14:49.046 ************************************ 00:14:49.046 00:14:49.046 real 0m2.585s 00:14:49.046 user 0m1.192s 00:14:49.046 sys 0m1.003s 00:14:49.046 21:17:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:49.046 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:14:49.046 21:17:38 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:14:49.046 21:17:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:49.046 21:17:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:49.046 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:14:49.305 ************************************ 00:14:49.305 START TEST rpc 00:14:49.305 ************************************ 00:14:49.305 21:17:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:14:49.305 * Looking for test storage... 00:14:49.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:14:49.305 21:17:38 -- rpc/rpc.sh@65 -- # spdk_pid=73146 00:14:49.305 21:17:38 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:14:49.305 21:17:38 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:49.305 21:17:38 -- rpc/rpc.sh@67 -- # waitforlisten 73146 00:14:49.305 21:17:38 -- common/autotest_common.sh@817 -- # '[' -z 73146 ']' 00:14:49.305 21:17:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.305 21:17:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:49.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.305 21:17:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
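The rpc suite that starts here drives a long-lived spdk_tgt over the UNIX socket /var/tmp/spdk.sock; the pid captured by waitforlisten (73146) is what the trap kills on exit, and rpc_cmd is a thin wrapper around scripts/rpc.py. A condensed sketch of that start/wait/drive/teardown loop, using rpc.py directly and a simple polling loop in place of the suite's waitforlisten helper; paths assume the same repo layout:

```bash
#!/usr/bin/env bash
# Sketch: start spdk_tgt, wait for its RPC socket, issue a few of the RPCs the
# rpc_integrity test uses, then shut the target down again.
rootdir=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk.sock

"$rootdir/build/bin/spdk_tgt" -e bdev &
spdk_pid=$!
trap 'kill $spdk_pid' EXIT

# Poor man's waitforlisten: poll until the target answers on the socket.
for _ in $(seq 1 100); do
    "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done

"$rootdir/scripts/rpc.py" -s "$sock" bdev_malloc_create 8 512     # creates Malloc0
"$rootdir/scripts/rpc.py" -s "$sock" bdev_get_bdevs | jq length   # expect 1
"$rootdir/scripts/rpc.py" -s "$sock" bdev_malloc_delete Malloc0
```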
00:14:49.305 21:17:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:49.305 21:17:38 -- common/autotest_common.sh@10 -- # set +x 00:14:49.305 [2024-04-26 21:17:38.498220] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:49.305 [2024-04-26 21:17:38.498399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73146 ] 00:14:49.563 [2024-04-26 21:17:38.638400] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.563 [2024-04-26 21:17:38.690500] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:14:49.563 [2024-04-26 21:17:38.690631] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 73146' to capture a snapshot of events at runtime. 00:14:49.563 [2024-04-26 21:17:38.690680] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.563 [2024-04-26 21:17:38.690717] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.563 [2024-04-26 21:17:38.690766] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid73146 for offline analysis/debug. 00:14:49.563 [2024-04-26 21:17:38.690833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.129 21:17:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:50.129 21:17:39 -- common/autotest_common.sh@850 -- # return 0 00:14:50.129 21:17:39 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:14:50.129 21:17:39 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:14:50.129 21:17:39 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:14:50.129 21:17:39 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:14:50.129 21:17:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:50.129 21:17:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.129 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:50.387 ************************************ 00:14:50.387 START TEST rpc_integrity 00:14:50.387 ************************************ 00:14:50.387 21:17:39 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:14:50.387 21:17:39 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:50.387 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:50.387 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:50.387 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:50.387 21:17:39 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:14:50.387 21:17:39 -- rpc/rpc.sh@13 -- # jq length 00:14:50.387 21:17:39 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:14:50.387 21:17:39 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:14:50.387 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:50.387 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:50.387 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:50.387 21:17:39 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:14:50.387 21:17:39 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 
00:14:50.387 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:50.387 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:50.387 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:50.387 21:17:39 -- rpc/rpc.sh@16 -- # bdevs='[ 00:14:50.387 { 00:14:50.387 "aliases": [ 00:14:50.387 "37807099-cf25-4568-a116-7a66870e6d97" 00:14:50.387 ], 00:14:50.387 "assigned_rate_limits": { 00:14:50.387 "r_mbytes_per_sec": 0, 00:14:50.387 "rw_ios_per_sec": 0, 00:14:50.387 "rw_mbytes_per_sec": 0, 00:14:50.387 "w_mbytes_per_sec": 0 00:14:50.387 }, 00:14:50.387 "block_size": 512, 00:14:50.387 "claimed": false, 00:14:50.387 "driver_specific": {}, 00:14:50.387 "memory_domains": [ 00:14:50.387 { 00:14:50.387 "dma_device_id": "system", 00:14:50.387 "dma_device_type": 1 00:14:50.387 }, 00:14:50.387 { 00:14:50.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.387 "dma_device_type": 2 00:14:50.387 } 00:14:50.387 ], 00:14:50.387 "name": "Malloc0", 00:14:50.387 "num_blocks": 16384, 00:14:50.388 "product_name": "Malloc disk", 00:14:50.388 "supported_io_types": { 00:14:50.388 "abort": true, 00:14:50.388 "compare": false, 00:14:50.388 "compare_and_write": false, 00:14:50.388 "flush": true, 00:14:50.388 "nvme_admin": false, 00:14:50.388 "nvme_io": false, 00:14:50.388 "read": true, 00:14:50.388 "reset": true, 00:14:50.388 "unmap": true, 00:14:50.388 "write": true, 00:14:50.388 "write_zeroes": true 00:14:50.388 }, 00:14:50.388 "uuid": "37807099-cf25-4568-a116-7a66870e6d97", 00:14:50.388 "zoned": false 00:14:50.388 } 00:14:50.388 ]' 00:14:50.388 21:17:39 -- rpc/rpc.sh@17 -- # jq length 00:14:50.388 21:17:39 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:14:50.388 21:17:39 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:14:50.388 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:50.388 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:50.388 [2024-04-26 21:17:39.614905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:14:50.388 [2024-04-26 21:17:39.614952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.388 [2024-04-26 21:17:39.614968] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x136e390 00:14:50.388 [2024-04-26 21:17:39.614975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.388 [2024-04-26 21:17:39.616524] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.388 [2024-04-26 21:17:39.616557] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:14:50.388 Passthru0 00:14:50.388 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:50.388 21:17:39 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:14:50.388 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:50.388 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:50.666 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:50.666 21:17:39 -- rpc/rpc.sh@20 -- # bdevs='[ 00:14:50.666 { 00:14:50.666 "aliases": [ 00:14:50.666 "37807099-cf25-4568-a116-7a66870e6d97" 00:14:50.666 ], 00:14:50.666 "assigned_rate_limits": { 00:14:50.666 "r_mbytes_per_sec": 0, 00:14:50.666 "rw_ios_per_sec": 0, 00:14:50.666 "rw_mbytes_per_sec": 0, 00:14:50.666 "w_mbytes_per_sec": 0 00:14:50.666 }, 00:14:50.666 "block_size": 512, 00:14:50.666 "claim_type": "exclusive_write", 00:14:50.666 "claimed": true, 00:14:50.666 "driver_specific": {}, 00:14:50.666 "memory_domains": [ 
00:14:50.666 { 00:14:50.666 "dma_device_id": "system", 00:14:50.666 "dma_device_type": 1 00:14:50.666 }, 00:14:50.666 { 00:14:50.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.666 "dma_device_type": 2 00:14:50.666 } 00:14:50.666 ], 00:14:50.666 "name": "Malloc0", 00:14:50.666 "num_blocks": 16384, 00:14:50.666 "product_name": "Malloc disk", 00:14:50.666 "supported_io_types": { 00:14:50.666 "abort": true, 00:14:50.666 "compare": false, 00:14:50.666 "compare_and_write": false, 00:14:50.666 "flush": true, 00:14:50.666 "nvme_admin": false, 00:14:50.666 "nvme_io": false, 00:14:50.666 "read": true, 00:14:50.666 "reset": true, 00:14:50.666 "unmap": true, 00:14:50.666 "write": true, 00:14:50.666 "write_zeroes": true 00:14:50.666 }, 00:14:50.666 "uuid": "37807099-cf25-4568-a116-7a66870e6d97", 00:14:50.666 "zoned": false 00:14:50.666 }, 00:14:50.666 { 00:14:50.666 "aliases": [ 00:14:50.666 "cb92ce60-f612-5619-90a3-e48492ea1740" 00:14:50.666 ], 00:14:50.666 "assigned_rate_limits": { 00:14:50.666 "r_mbytes_per_sec": 0, 00:14:50.666 "rw_ios_per_sec": 0, 00:14:50.666 "rw_mbytes_per_sec": 0, 00:14:50.666 "w_mbytes_per_sec": 0 00:14:50.666 }, 00:14:50.666 "block_size": 512, 00:14:50.666 "claimed": false, 00:14:50.666 "driver_specific": { 00:14:50.666 "passthru": { 00:14:50.666 "base_bdev_name": "Malloc0", 00:14:50.666 "name": "Passthru0" 00:14:50.666 } 00:14:50.666 }, 00:14:50.666 "memory_domains": [ 00:14:50.666 { 00:14:50.666 "dma_device_id": "system", 00:14:50.666 "dma_device_type": 1 00:14:50.666 }, 00:14:50.666 { 00:14:50.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.666 "dma_device_type": 2 00:14:50.666 } 00:14:50.666 ], 00:14:50.666 "name": "Passthru0", 00:14:50.666 "num_blocks": 16384, 00:14:50.666 "product_name": "passthru", 00:14:50.666 "supported_io_types": { 00:14:50.666 "abort": true, 00:14:50.666 "compare": false, 00:14:50.666 "compare_and_write": false, 00:14:50.666 "flush": true, 00:14:50.666 "nvme_admin": false, 00:14:50.666 "nvme_io": false, 00:14:50.666 "read": true, 00:14:50.666 "reset": true, 00:14:50.666 "unmap": true, 00:14:50.666 "write": true, 00:14:50.666 "write_zeroes": true 00:14:50.666 }, 00:14:50.666 "uuid": "cb92ce60-f612-5619-90a3-e48492ea1740", 00:14:50.666 "zoned": false 00:14:50.666 } 00:14:50.666 ]' 00:14:50.666 21:17:39 -- rpc/rpc.sh@21 -- # jq length 00:14:50.666 21:17:39 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:14:50.666 21:17:39 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:14:50.666 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:50.666 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:50.666 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:50.666 21:17:39 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:50.666 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:50.666 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:50.666 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:50.666 21:17:39 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:50.666 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:50.666 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:50.666 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:50.666 21:17:39 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:14:50.666 21:17:39 -- rpc/rpc.sh@26 -- # jq length 00:14:50.666 ************************************ 00:14:50.666 END TEST rpc_integrity 00:14:50.666 ************************************ 00:14:50.666 21:17:39 -- rpc/rpc.sh@26 -- # '[' 0 
== 0 ']' 00:14:50.666 00:14:50.666 real 0m0.329s 00:14:50.666 user 0m0.195s 00:14:50.666 sys 0m0.049s 00:14:50.666 21:17:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:50.666 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:50.666 21:17:39 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:14:50.666 21:17:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:50.666 21:17:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.666 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:50.925 ************************************ 00:14:50.925 START TEST rpc_plugins 00:14:50.925 ************************************ 00:14:50.925 21:17:39 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:14:50.925 21:17:39 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:14:50.925 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:50.925 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:50.925 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:50.925 21:17:39 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:14:50.925 21:17:39 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:14:50.925 21:17:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:50.925 21:17:39 -- common/autotest_common.sh@10 -- # set +x 00:14:50.925 21:17:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:50.925 21:17:39 -- rpc/rpc.sh@31 -- # bdevs='[ 00:14:50.925 { 00:14:50.925 "aliases": [ 00:14:50.925 "f1dc7880-05d7-4bfd-ac59-a7627951f4a3" 00:14:50.925 ], 00:14:50.925 "assigned_rate_limits": { 00:14:50.925 "r_mbytes_per_sec": 0, 00:14:50.925 "rw_ios_per_sec": 0, 00:14:50.925 "rw_mbytes_per_sec": 0, 00:14:50.925 "w_mbytes_per_sec": 0 00:14:50.925 }, 00:14:50.925 "block_size": 4096, 00:14:50.925 "claimed": false, 00:14:50.925 "driver_specific": {}, 00:14:50.925 "memory_domains": [ 00:14:50.925 { 00:14:50.925 "dma_device_id": "system", 00:14:50.925 "dma_device_type": 1 00:14:50.925 }, 00:14:50.925 { 00:14:50.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.926 "dma_device_type": 2 00:14:50.926 } 00:14:50.926 ], 00:14:50.926 "name": "Malloc1", 00:14:50.926 "num_blocks": 256, 00:14:50.926 "product_name": "Malloc disk", 00:14:50.926 "supported_io_types": { 00:14:50.926 "abort": true, 00:14:50.926 "compare": false, 00:14:50.926 "compare_and_write": false, 00:14:50.926 "flush": true, 00:14:50.926 "nvme_admin": false, 00:14:50.926 "nvme_io": false, 00:14:50.926 "read": true, 00:14:50.926 "reset": true, 00:14:50.926 "unmap": true, 00:14:50.926 "write": true, 00:14:50.926 "write_zeroes": true 00:14:50.926 }, 00:14:50.926 "uuid": "f1dc7880-05d7-4bfd-ac59-a7627951f4a3", 00:14:50.926 "zoned": false 00:14:50.926 } 00:14:50.926 ]' 00:14:50.926 21:17:39 -- rpc/rpc.sh@32 -- # jq length 00:14:50.926 21:17:40 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:14:50.926 21:17:40 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:14:50.926 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:50.926 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:50.926 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:50.926 21:17:40 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:14:50.926 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:50.926 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:50.926 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:50.926 21:17:40 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:14:50.926 21:17:40 -- rpc/rpc.sh@36 -- # jq length 
00:14:50.926 ************************************ 00:14:50.926 END TEST rpc_plugins 00:14:50.926 ************************************ 00:14:50.926 21:17:40 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:14:50.926 00:14:50.926 real 0m0.169s 00:14:50.926 user 0m0.102s 00:14:50.926 sys 0m0.023s 00:14:50.926 21:17:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:50.926 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:50.926 21:17:40 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:14:50.926 21:17:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:50.926 21:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.926 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:51.184 ************************************ 00:14:51.184 START TEST rpc_trace_cmd_test 00:14:51.184 ************************************ 00:14:51.184 21:17:40 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:14:51.184 21:17:40 -- rpc/rpc.sh@40 -- # local info 00:14:51.184 21:17:40 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:14:51.184 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.184 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:51.184 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.184 21:17:40 -- rpc/rpc.sh@42 -- # info='{ 00:14:51.184 "bdev": { 00:14:51.184 "mask": "0x8", 00:14:51.184 "tpoint_mask": "0xffffffffffffffff" 00:14:51.184 }, 00:14:51.184 "bdev_nvme": { 00:14:51.184 "mask": "0x4000", 00:14:51.184 "tpoint_mask": "0x0" 00:14:51.184 }, 00:14:51.184 "blobfs": { 00:14:51.184 "mask": "0x80", 00:14:51.184 "tpoint_mask": "0x0" 00:14:51.184 }, 00:14:51.184 "dsa": { 00:14:51.184 "mask": "0x200", 00:14:51.184 "tpoint_mask": "0x0" 00:14:51.184 }, 00:14:51.184 "ftl": { 00:14:51.184 "mask": "0x40", 00:14:51.184 "tpoint_mask": "0x0" 00:14:51.184 }, 00:14:51.184 "iaa": { 00:14:51.184 "mask": "0x1000", 00:14:51.184 "tpoint_mask": "0x0" 00:14:51.184 }, 00:14:51.184 "iscsi_conn": { 00:14:51.184 "mask": "0x2", 00:14:51.184 "tpoint_mask": "0x0" 00:14:51.184 }, 00:14:51.184 "nvme_pcie": { 00:14:51.184 "mask": "0x800", 00:14:51.184 "tpoint_mask": "0x0" 00:14:51.184 }, 00:14:51.184 "nvme_tcp": { 00:14:51.184 "mask": "0x2000", 00:14:51.184 "tpoint_mask": "0x0" 00:14:51.184 }, 00:14:51.184 "nvmf_rdma": { 00:14:51.184 "mask": "0x10", 00:14:51.184 "tpoint_mask": "0x0" 00:14:51.184 }, 00:14:51.184 "nvmf_tcp": { 00:14:51.184 "mask": "0x20", 00:14:51.184 "tpoint_mask": "0x0" 00:14:51.185 }, 00:14:51.185 "scsi": { 00:14:51.185 "mask": "0x4", 00:14:51.185 "tpoint_mask": "0x0" 00:14:51.185 }, 00:14:51.185 "sock": { 00:14:51.185 "mask": "0x8000", 00:14:51.185 "tpoint_mask": "0x0" 00:14:51.185 }, 00:14:51.185 "thread": { 00:14:51.185 "mask": "0x400", 00:14:51.185 "tpoint_mask": "0x0" 00:14:51.185 }, 00:14:51.185 "tpoint_group_mask": "0x8", 00:14:51.185 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid73146" 00:14:51.185 }' 00:14:51.185 21:17:40 -- rpc/rpc.sh@43 -- # jq length 00:14:51.185 21:17:40 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:14:51.185 21:17:40 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:14:51.185 21:17:40 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:14:51.185 21:17:40 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:14:51.185 21:17:40 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:14:51.185 21:17:40 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:14:51.444 21:17:40 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:14:51.444 21:17:40 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:14:51.444 
************************************ 00:14:51.444 END TEST rpc_trace_cmd_test 00:14:51.444 ************************************ 00:14:51.444 21:17:40 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:14:51.444 00:14:51.444 real 0m0.263s 00:14:51.444 user 0m0.211s 00:14:51.444 sys 0m0.040s 00:14:51.444 21:17:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:51.444 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:51.444 21:17:40 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:14:51.444 21:17:40 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:14:51.444 21:17:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:51.444 21:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:51.444 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:51.444 ************************************ 00:14:51.444 START TEST go_rpc 00:14:51.444 ************************************ 00:14:51.444 21:17:40 -- common/autotest_common.sh@1111 -- # go_rpc 00:14:51.444 21:17:40 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:14:51.444 21:17:40 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:14:51.444 21:17:40 -- rpc/rpc.sh@52 -- # jq length 00:14:51.704 21:17:40 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:14:51.704 21:17:40 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:14:51.704 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.704 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:51.704 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.704 21:17:40 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:14:51.704 21:17:40 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:14:51.704 21:17:40 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["82da8642-375d-4e79-8164-90bd90a4bbfb"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"82da8642-375d-4e79-8164-90bd90a4bbfb","zoned":false}]' 00:14:51.704 21:17:40 -- rpc/rpc.sh@57 -- # jq length 00:14:51.704 21:17:40 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:14:51.704 21:17:40 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:51.704 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.704 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:51.704 21:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.704 21:17:40 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:14:51.704 21:17:40 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:14:51.704 21:17:40 -- rpc/rpc.sh@61 -- # jq length 00:14:51.704 ************************************ 00:14:51.704 END TEST go_rpc 00:14:51.704 ************************************ 00:14:51.704 21:17:40 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:14:51.704 00:14:51.704 real 0m0.226s 00:14:51.704 user 0m0.142s 00:14:51.704 sys 0m0.050s 00:14:51.704 21:17:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:51.704 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:51.704 21:17:40 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:14:51.704 21:17:40 -- 
rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:14:51.704 21:17:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:51.704 21:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:51.704 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:51.962 ************************************ 00:14:51.962 START TEST rpc_daemon_integrity 00:14:51.962 ************************************ 00:14:51.962 21:17:40 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:14:51.962 21:17:40 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:51.962 21:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.962 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:14:51.963 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.963 21:17:41 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:14:51.963 21:17:41 -- rpc/rpc.sh@13 -- # jq length 00:14:51.963 21:17:41 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:14:51.963 21:17:41 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:14:51.963 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.963 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:14:51.963 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.963 21:17:41 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:14:51.963 21:17:41 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:14:51.963 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.963 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:14:51.963 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.963 21:17:41 -- rpc/rpc.sh@16 -- # bdevs='[ 00:14:51.963 { 00:14:51.963 "aliases": [ 00:14:51.963 "e39ae647-d725-4ae9-8a08-65638c98a483" 00:14:51.963 ], 00:14:51.963 "assigned_rate_limits": { 00:14:51.963 "r_mbytes_per_sec": 0, 00:14:51.963 "rw_ios_per_sec": 0, 00:14:51.963 "rw_mbytes_per_sec": 0, 00:14:51.963 "w_mbytes_per_sec": 0 00:14:51.963 }, 00:14:51.963 "block_size": 512, 00:14:51.963 "claimed": false, 00:14:51.963 "driver_specific": {}, 00:14:51.963 "memory_domains": [ 00:14:51.963 { 00:14:51.963 "dma_device_id": "system", 00:14:51.963 "dma_device_type": 1 00:14:51.963 }, 00:14:51.963 { 00:14:51.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.963 "dma_device_type": 2 00:14:51.963 } 00:14:51.963 ], 00:14:51.963 "name": "Malloc3", 00:14:51.963 "num_blocks": 16384, 00:14:51.963 "product_name": "Malloc disk", 00:14:51.963 "supported_io_types": { 00:14:51.963 "abort": true, 00:14:51.963 "compare": false, 00:14:51.963 "compare_and_write": false, 00:14:51.963 "flush": true, 00:14:51.963 "nvme_admin": false, 00:14:51.963 "nvme_io": false, 00:14:51.963 "read": true, 00:14:51.963 "reset": true, 00:14:51.963 "unmap": true, 00:14:51.963 "write": true, 00:14:51.963 "write_zeroes": true 00:14:51.963 }, 00:14:51.963 "uuid": "e39ae647-d725-4ae9-8a08-65638c98a483", 00:14:51.963 "zoned": false 00:14:51.963 } 00:14:51.963 ]' 00:14:51.963 21:17:41 -- rpc/rpc.sh@17 -- # jq length 00:14:51.963 21:17:41 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:14:51.963 21:17:41 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:14:51.963 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.963 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:14:51.963 [2024-04-26 21:17:41.148493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:14:51.963 [2024-04-26 21:17:41.148535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.963 [2024-04-26 
21:17:41.148552] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x136e5c0 00:14:51.963 [2024-04-26 21:17:41.148558] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.963 [2024-04-26 21:17:41.149922] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.963 [2024-04-26 21:17:41.149955] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:14:51.963 Passthru0 00:14:51.963 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.963 21:17:41 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:14:51.963 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.963 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:14:51.963 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.963 21:17:41 -- rpc/rpc.sh@20 -- # bdevs='[ 00:14:51.963 { 00:14:51.963 "aliases": [ 00:14:51.963 "e39ae647-d725-4ae9-8a08-65638c98a483" 00:14:51.963 ], 00:14:51.963 "assigned_rate_limits": { 00:14:51.963 "r_mbytes_per_sec": 0, 00:14:51.963 "rw_ios_per_sec": 0, 00:14:51.963 "rw_mbytes_per_sec": 0, 00:14:51.963 "w_mbytes_per_sec": 0 00:14:51.963 }, 00:14:51.963 "block_size": 512, 00:14:51.963 "claim_type": "exclusive_write", 00:14:51.963 "claimed": true, 00:14:51.963 "driver_specific": {}, 00:14:51.963 "memory_domains": [ 00:14:51.963 { 00:14:51.963 "dma_device_id": "system", 00:14:51.963 "dma_device_type": 1 00:14:51.963 }, 00:14:51.963 { 00:14:51.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.963 "dma_device_type": 2 00:14:51.963 } 00:14:51.963 ], 00:14:51.963 "name": "Malloc3", 00:14:51.963 "num_blocks": 16384, 00:14:51.963 "product_name": "Malloc disk", 00:14:51.963 "supported_io_types": { 00:14:51.963 "abort": true, 00:14:51.963 "compare": false, 00:14:51.963 "compare_and_write": false, 00:14:51.963 "flush": true, 00:14:51.963 "nvme_admin": false, 00:14:51.963 "nvme_io": false, 00:14:51.963 "read": true, 00:14:51.963 "reset": true, 00:14:51.963 "unmap": true, 00:14:51.963 "write": true, 00:14:51.963 "write_zeroes": true 00:14:51.963 }, 00:14:51.963 "uuid": "e39ae647-d725-4ae9-8a08-65638c98a483", 00:14:51.963 "zoned": false 00:14:51.963 }, 00:14:51.963 { 00:14:51.963 "aliases": [ 00:14:51.963 "99fdf80b-a148-519a-a59e-fc35b6d1654a" 00:14:51.963 ], 00:14:51.963 "assigned_rate_limits": { 00:14:51.963 "r_mbytes_per_sec": 0, 00:14:51.963 "rw_ios_per_sec": 0, 00:14:51.963 "rw_mbytes_per_sec": 0, 00:14:51.963 "w_mbytes_per_sec": 0 00:14:51.963 }, 00:14:51.963 "block_size": 512, 00:14:51.963 "claimed": false, 00:14:51.963 "driver_specific": { 00:14:51.963 "passthru": { 00:14:51.963 "base_bdev_name": "Malloc3", 00:14:51.963 "name": "Passthru0" 00:14:51.963 } 00:14:51.963 }, 00:14:51.963 "memory_domains": [ 00:14:51.963 { 00:14:51.963 "dma_device_id": "system", 00:14:51.963 "dma_device_type": 1 00:14:51.963 }, 00:14:51.963 { 00:14:51.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.963 "dma_device_type": 2 00:14:51.963 } 00:14:51.963 ], 00:14:51.963 "name": "Passthru0", 00:14:51.963 "num_blocks": 16384, 00:14:51.963 "product_name": "passthru", 00:14:51.963 "supported_io_types": { 00:14:51.963 "abort": true, 00:14:51.963 "compare": false, 00:14:51.963 "compare_and_write": false, 00:14:51.963 "flush": true, 00:14:51.963 "nvme_admin": false, 00:14:51.963 "nvme_io": false, 00:14:51.963 "read": true, 00:14:51.963 "reset": true, 00:14:51.963 "unmap": true, 00:14:51.963 "write": true, 00:14:51.963 "write_zeroes": true 00:14:51.963 }, 00:14:51.963 
"uuid": "99fdf80b-a148-519a-a59e-fc35b6d1654a", 00:14:51.963 "zoned": false 00:14:51.963 } 00:14:51.963 ]' 00:14:51.963 21:17:41 -- rpc/rpc.sh@21 -- # jq length 00:14:52.223 21:17:41 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:14:52.223 21:17:41 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:14:52.223 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.223 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.223 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.223 21:17:41 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:52.223 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.223 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.223 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.223 21:17:41 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:52.223 21:17:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:52.223 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.223 21:17:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:52.223 21:17:41 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:14:52.223 21:17:41 -- rpc/rpc.sh@26 -- # jq length 00:14:52.223 ************************************ 00:14:52.223 END TEST rpc_daemon_integrity 00:14:52.223 ************************************ 00:14:52.223 21:17:41 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:14:52.223 00:14:52.223 real 0m0.291s 00:14:52.223 user 0m0.181s 00:14:52.223 sys 0m0.040s 00:14:52.223 21:17:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:52.223 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.223 21:17:41 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:52.223 21:17:41 -- rpc/rpc.sh@84 -- # killprocess 73146 00:14:52.223 21:17:41 -- common/autotest_common.sh@936 -- # '[' -z 73146 ']' 00:14:52.223 21:17:41 -- common/autotest_common.sh@940 -- # kill -0 73146 00:14:52.223 21:17:41 -- common/autotest_common.sh@941 -- # uname 00:14:52.223 21:17:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:52.223 21:17:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73146 00:14:52.223 killing process with pid 73146 00:14:52.223 21:17:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:52.223 21:17:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:52.223 21:17:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73146' 00:14:52.223 21:17:41 -- common/autotest_common.sh@955 -- # kill 73146 00:14:52.223 21:17:41 -- common/autotest_common.sh@960 -- # wait 73146 00:14:52.480 00:14:52.480 real 0m3.371s 00:14:52.480 user 0m4.407s 00:14:52.480 sys 0m0.963s 00:14:52.480 21:17:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:52.480 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.480 ************************************ 00:14:52.480 END TEST rpc 00:14:52.480 ************************************ 00:14:52.480 21:17:41 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:14:52.480 21:17:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:52.480 21:17:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.480 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.740 ************************************ 00:14:52.740 START TEST skip_rpc 00:14:52.740 ************************************ 00:14:52.740 21:17:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 
00:14:52.740 * Looking for test storage... 00:14:52.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:14:52.740 21:17:41 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:52.740 21:17:41 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:14:52.740 21:17:41 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:14:52.740 21:17:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:52.740 21:17:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.740 21:17:41 -- common/autotest_common.sh@10 -- # set +x 00:14:52.999 ************************************ 00:14:52.999 START TEST skip_rpc 00:14:52.999 ************************************ 00:14:52.999 21:17:42 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:14:52.999 21:17:42 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:14:52.999 21:17:42 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=73444 00:14:52.999 21:17:42 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:52.999 21:17:42 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:14:52.999 [2024-04-26 21:17:42.056571] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:52.999 [2024-04-26 21:17:42.056725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73444 ] 00:14:52.999 [2024-04-26 21:17:42.195416] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.999 [2024-04-26 21:17:42.249284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.323 21:17:47 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:14:58.323 21:17:47 -- common/autotest_common.sh@638 -- # local es=0 00:14:58.323 21:17:47 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:14:58.323 21:17:47 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:14:58.323 21:17:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:58.323 21:17:47 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:14:58.323 21:17:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:58.323 21:17:47 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:14:58.323 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:58.323 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:14:58.323 2024/04/26 21:17:47 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:14:58.323 21:17:47 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:14:58.323 21:17:47 -- common/autotest_common.sh@641 -- # es=1 00:14:58.323 21:17:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:58.323 21:17:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:58.323 21:17:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:58.323 21:17:47 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:14:58.323 21:17:47 -- rpc/skip_rpc.sh@23 -- # killprocess 73444 00:14:58.323 21:17:47 -- common/autotest_common.sh@936 -- # '[' -z 73444 ']' 00:14:58.323 21:17:47 -- common/autotest_common.sh@940 -- # kill -0 73444 00:14:58.323 21:17:47 -- 
common/autotest_common.sh@941 -- # uname 00:14:58.323 21:17:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:58.323 21:17:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73444 00:14:58.323 killing process with pid 73444 00:14:58.323 21:17:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:58.323 21:17:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:58.323 21:17:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73444' 00:14:58.323 21:17:47 -- common/autotest_common.sh@955 -- # kill 73444 00:14:58.323 21:17:47 -- common/autotest_common.sh@960 -- # wait 73444 00:14:58.323 00:14:58.323 real 0m5.372s 00:14:58.323 user 0m5.055s 00:14:58.323 sys 0m0.235s 00:14:58.323 ************************************ 00:14:58.323 END TEST skip_rpc 00:14:58.323 ************************************ 00:14:58.323 21:17:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:58.323 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:14:58.323 21:17:47 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:14:58.323 21:17:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:58.323 21:17:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:58.323 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:14:58.323 ************************************ 00:14:58.323 START TEST skip_rpc_with_json 00:14:58.323 ************************************ 00:14:58.323 21:17:47 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:14:58.323 21:17:47 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:14:58.323 21:17:47 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=73540 00:14:58.323 21:17:47 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:58.323 21:17:47 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:14:58.323 21:17:47 -- rpc/skip_rpc.sh@31 -- # waitforlisten 73540 00:14:58.323 21:17:47 -- common/autotest_common.sh@817 -- # '[' -z 73540 ']' 00:14:58.323 21:17:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.323 21:17:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:58.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.323 21:17:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.323 21:17:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:58.323 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:14:58.582 [2024-04-26 21:17:47.577503] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:14:58.582 [2024-04-26 21:17:47.577661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73540 ] 00:14:58.582 [2024-04-26 21:17:47.701250] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.582 [2024-04-26 21:17:47.750760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.841 21:17:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:58.841 21:17:47 -- common/autotest_common.sh@850 -- # return 0 00:14:58.841 21:17:47 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:14:58.841 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:58.841 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:14:58.841 [2024-04-26 21:17:47.968744] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:14:58.841 2024/04/26 21:17:47 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:14:58.841 request: 00:14:58.841 { 00:14:58.841 "method": "nvmf_get_transports", 00:14:58.841 "params": { 00:14:58.841 "trtype": "tcp" 00:14:58.841 } 00:14:58.841 } 00:14:58.841 Got JSON-RPC error response 00:14:58.841 GoRPCClient: error on JSON-RPC call 00:14:58.841 21:17:47 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:14:58.841 21:17:47 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:14:58.841 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:58.841 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:14:58.841 [2024-04-26 21:17:47.980803] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.841 21:17:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:58.841 21:17:47 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:14:58.841 21:17:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:58.841 21:17:47 -- common/autotest_common.sh@10 -- # set +x 00:14:59.101 21:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.101 21:17:48 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:59.101 { 00:14:59.101 "subsystems": [ 00:14:59.101 { 00:14:59.101 "subsystem": "keyring", 00:14:59.101 "config": [] 00:14:59.101 }, 00:14:59.101 { 00:14:59.101 "subsystem": "iobuf", 00:14:59.101 "config": [ 00:14:59.101 { 00:14:59.101 "method": "iobuf_set_options", 00:14:59.101 "params": { 00:14:59.101 "large_bufsize": 135168, 00:14:59.101 "large_pool_count": 1024, 00:14:59.101 "small_bufsize": 8192, 00:14:59.101 "small_pool_count": 8192 00:14:59.101 } 00:14:59.101 } 00:14:59.101 ] 00:14:59.101 }, 00:14:59.101 { 00:14:59.101 "subsystem": "sock", 00:14:59.101 "config": [ 00:14:59.101 { 00:14:59.101 "method": "sock_impl_set_options", 00:14:59.101 "params": { 00:14:59.101 "enable_ktls": false, 00:14:59.101 "enable_placement_id": 0, 00:14:59.101 "enable_quickack": false, 00:14:59.101 "enable_recv_pipe": true, 00:14:59.101 "enable_zerocopy_send_client": false, 00:14:59.101 "enable_zerocopy_send_server": true, 00:14:59.101 "impl_name": "posix", 00:14:59.101 "recv_buf_size": 2097152, 00:14:59.101 "send_buf_size": 2097152, 00:14:59.101 "tls_version": 0, 00:14:59.101 "zerocopy_threshold": 0 00:14:59.101 } 00:14:59.101 }, 00:14:59.101 { 00:14:59.101 "method": "sock_impl_set_options", 00:14:59.101 "params": { 00:14:59.101 
"enable_ktls": false, 00:14:59.101 "enable_placement_id": 0, 00:14:59.101 "enable_quickack": false, 00:14:59.101 "enable_recv_pipe": true, 00:14:59.101 "enable_zerocopy_send_client": false, 00:14:59.101 "enable_zerocopy_send_server": true, 00:14:59.101 "impl_name": "ssl", 00:14:59.101 "recv_buf_size": 4096, 00:14:59.101 "send_buf_size": 4096, 00:14:59.101 "tls_version": 0, 00:14:59.101 "zerocopy_threshold": 0 00:14:59.101 } 00:14:59.101 } 00:14:59.101 ] 00:14:59.101 }, 00:14:59.101 { 00:14:59.101 "subsystem": "vmd", 00:14:59.101 "config": [] 00:14:59.101 }, 00:14:59.101 { 00:14:59.101 "subsystem": "accel", 00:14:59.101 "config": [ 00:14:59.101 { 00:14:59.101 "method": "accel_set_options", 00:14:59.101 "params": { 00:14:59.101 "buf_count": 2048, 00:14:59.101 "large_cache_size": 16, 00:14:59.101 "sequence_count": 2048, 00:14:59.101 "small_cache_size": 128, 00:14:59.101 "task_count": 2048 00:14:59.101 } 00:14:59.101 } 00:14:59.102 ] 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "subsystem": "bdev", 00:14:59.102 "config": [ 00:14:59.102 { 00:14:59.102 "method": "bdev_set_options", 00:14:59.102 "params": { 00:14:59.102 "bdev_auto_examine": true, 00:14:59.102 "bdev_io_cache_size": 256, 00:14:59.102 "bdev_io_pool_size": 65535, 00:14:59.102 "iobuf_large_cache_size": 16, 00:14:59.102 "iobuf_small_cache_size": 128 00:14:59.102 } 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "method": "bdev_raid_set_options", 00:14:59.102 "params": { 00:14:59.102 "process_window_size_kb": 1024 00:14:59.102 } 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "method": "bdev_iscsi_set_options", 00:14:59.102 "params": { 00:14:59.102 "timeout_sec": 30 00:14:59.102 } 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "method": "bdev_nvme_set_options", 00:14:59.102 "params": { 00:14:59.102 "action_on_timeout": "none", 00:14:59.102 "allow_accel_sequence": false, 00:14:59.102 "arbitration_burst": 0, 00:14:59.102 "bdev_retry_count": 3, 00:14:59.102 "ctrlr_loss_timeout_sec": 0, 00:14:59.102 "delay_cmd_submit": true, 00:14:59.102 "dhchap_dhgroups": [ 00:14:59.102 "null", 00:14:59.102 "ffdhe2048", 00:14:59.102 "ffdhe3072", 00:14:59.102 "ffdhe4096", 00:14:59.102 "ffdhe6144", 00:14:59.102 "ffdhe8192" 00:14:59.102 ], 00:14:59.102 "dhchap_digests": [ 00:14:59.102 "sha256", 00:14:59.102 "sha384", 00:14:59.102 "sha512" 00:14:59.102 ], 00:14:59.102 "disable_auto_failback": false, 00:14:59.102 "fast_io_fail_timeout_sec": 0, 00:14:59.102 "generate_uuids": false, 00:14:59.102 "high_priority_weight": 0, 00:14:59.102 "io_path_stat": false, 00:14:59.102 "io_queue_requests": 0, 00:14:59.102 "keep_alive_timeout_ms": 10000, 00:14:59.102 "low_priority_weight": 0, 00:14:59.102 "medium_priority_weight": 0, 00:14:59.102 "nvme_adminq_poll_period_us": 10000, 00:14:59.102 "nvme_error_stat": false, 00:14:59.102 "nvme_ioq_poll_period_us": 0, 00:14:59.102 "rdma_cm_event_timeout_ms": 0, 00:14:59.102 "rdma_max_cq_size": 0, 00:14:59.102 "rdma_srq_size": 0, 00:14:59.102 "reconnect_delay_sec": 0, 00:14:59.102 "timeout_admin_us": 0, 00:14:59.102 "timeout_us": 0, 00:14:59.102 "transport_ack_timeout": 0, 00:14:59.102 "transport_retry_count": 4, 00:14:59.102 "transport_tos": 0 00:14:59.102 } 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "method": "bdev_nvme_set_hotplug", 00:14:59.102 "params": { 00:14:59.102 "enable": false, 00:14:59.102 "period_us": 100000 00:14:59.102 } 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "method": "bdev_wait_for_examine" 00:14:59.102 } 00:14:59.102 ] 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "subsystem": "scsi", 00:14:59.102 "config": null 00:14:59.102 
}, 00:14:59.102 { 00:14:59.102 "subsystem": "scheduler", 00:14:59.102 "config": [ 00:14:59.102 { 00:14:59.102 "method": "framework_set_scheduler", 00:14:59.102 "params": { 00:14:59.102 "name": "static" 00:14:59.102 } 00:14:59.102 } 00:14:59.102 ] 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "subsystem": "vhost_scsi", 00:14:59.102 "config": [] 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "subsystem": "vhost_blk", 00:14:59.102 "config": [] 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "subsystem": "ublk", 00:14:59.102 "config": [] 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "subsystem": "nbd", 00:14:59.102 "config": [] 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "subsystem": "nvmf", 00:14:59.102 "config": [ 00:14:59.102 { 00:14:59.102 "method": "nvmf_set_config", 00:14:59.102 "params": { 00:14:59.102 "admin_cmd_passthru": { 00:14:59.102 "identify_ctrlr": false 00:14:59.102 }, 00:14:59.102 "discovery_filter": "match_any" 00:14:59.102 } 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "method": "nvmf_set_max_subsystems", 00:14:59.102 "params": { 00:14:59.102 "max_subsystems": 1024 00:14:59.102 } 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "method": "nvmf_set_crdt", 00:14:59.102 "params": { 00:14:59.102 "crdt1": 0, 00:14:59.102 "crdt2": 0, 00:14:59.102 "crdt3": 0 00:14:59.102 } 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "method": "nvmf_create_transport", 00:14:59.102 "params": { 00:14:59.102 "abort_timeout_sec": 1, 00:14:59.102 "ack_timeout": 0, 00:14:59.102 "buf_cache_size": 4294967295, 00:14:59.102 "c2h_success": true, 00:14:59.102 "data_wr_pool_size": 0, 00:14:59.102 "dif_insert_or_strip": false, 00:14:59.102 "in_capsule_data_size": 4096, 00:14:59.102 "io_unit_size": 131072, 00:14:59.102 "max_aq_depth": 128, 00:14:59.102 "max_io_qpairs_per_ctrlr": 127, 00:14:59.102 "max_io_size": 131072, 00:14:59.102 "max_queue_depth": 128, 00:14:59.102 "num_shared_buffers": 511, 00:14:59.102 "sock_priority": 0, 00:14:59.102 "trtype": "TCP", 00:14:59.102 "zcopy": false 00:14:59.102 } 00:14:59.102 } 00:14:59.102 ] 00:14:59.102 }, 00:14:59.102 { 00:14:59.102 "subsystem": "iscsi", 00:14:59.102 "config": [ 00:14:59.102 { 00:14:59.102 "method": "iscsi_set_options", 00:14:59.102 "params": { 00:14:59.102 "allow_duplicated_isid": false, 00:14:59.102 "chap_group": 0, 00:14:59.102 "data_out_pool_size": 2048, 00:14:59.102 "default_time2retain": 20, 00:14:59.102 "default_time2wait": 2, 00:14:59.102 "disable_chap": false, 00:14:59.102 "error_recovery_level": 0, 00:14:59.102 "first_burst_length": 8192, 00:14:59.102 "immediate_data": true, 00:14:59.102 "immediate_data_pool_size": 16384, 00:14:59.102 "max_connections_per_session": 2, 00:14:59.102 "max_large_datain_per_connection": 64, 00:14:59.102 "max_queue_depth": 64, 00:14:59.102 "max_r2t_per_connection": 4, 00:14:59.102 "max_sessions": 128, 00:14:59.102 "mutual_chap": false, 00:14:59.102 "node_base": "iqn.2016-06.io.spdk", 00:14:59.102 "nop_in_interval": 30, 00:14:59.102 "nop_timeout": 60, 00:14:59.102 "pdu_pool_size": 36864, 00:14:59.102 "require_chap": false 00:14:59.102 } 00:14:59.102 } 00:14:59.102 ] 00:14:59.102 } 00:14:59.102 ] 00:14:59.102 } 00:14:59.102 21:17:48 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:59.102 21:17:48 -- rpc/skip_rpc.sh@40 -- # killprocess 73540 00:14:59.102 21:17:48 -- common/autotest_common.sh@936 -- # '[' -z 73540 ']' 00:14:59.102 21:17:48 -- common/autotest_common.sh@940 -- # kill -0 73540 00:14:59.102 21:17:48 -- common/autotest_common.sh@941 -- # uname 00:14:59.102 21:17:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux 
']' 00:14:59.102 21:17:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73540 00:14:59.102 21:17:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:59.102 21:17:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:59.102 21:17:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73540' 00:14:59.102 killing process with pid 73540 00:14:59.102 21:17:48 -- common/autotest_common.sh@955 -- # kill 73540 00:14:59.102 21:17:48 -- common/autotest_common.sh@960 -- # wait 73540 00:14:59.361 21:17:48 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=73566 00:14:59.361 21:17:48 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:14:59.361 21:17:48 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:15:04.644 21:17:53 -- rpc/skip_rpc.sh@50 -- # killprocess 73566 00:15:04.644 21:17:53 -- common/autotest_common.sh@936 -- # '[' -z 73566 ']' 00:15:04.644 21:17:53 -- common/autotest_common.sh@940 -- # kill -0 73566 00:15:04.644 21:17:53 -- common/autotest_common.sh@941 -- # uname 00:15:04.644 21:17:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:04.644 21:17:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73566 00:15:04.644 killing process with pid 73566 00:15:04.644 21:17:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:04.644 21:17:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:04.644 21:17:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73566' 00:15:04.644 21:17:53 -- common/autotest_common.sh@955 -- # kill 73566 00:15:04.644 21:17:53 -- common/autotest_common.sh@960 -- # wait 73566 00:15:04.644 21:17:53 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:04.644 21:17:53 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:04.644 00:15:04.644 real 0m6.335s 00:15:04.644 user 0m5.932s 00:15:04.644 sys 0m0.544s 00:15:04.644 21:17:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:04.644 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:15:04.644 ************************************ 00:15:04.644 END TEST skip_rpc_with_json 00:15:04.644 ************************************ 00:15:04.903 21:17:53 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:15:04.903 21:17:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:04.903 21:17:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:04.903 21:17:53 -- common/autotest_common.sh@10 -- # set +x 00:15:04.903 ************************************ 00:15:04.903 START TEST skip_rpc_with_delay 00:15:04.903 ************************************ 00:15:04.903 21:17:53 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:15:04.903 21:17:53 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:04.903 21:17:53 -- common/autotest_common.sh@638 -- # local es=0 00:15:04.903 21:17:53 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:04.903 21:17:53 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:04.903 21:17:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:04.903 21:17:53 -- common/autotest_common.sh@630 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:04.903 21:17:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:04.903 21:17:53 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:04.903 21:17:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:04.903 21:17:53 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:04.903 21:17:53 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:04.903 21:17:53 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:04.903 [2024-04-26 21:17:54.055804] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:15:04.903 [2024-04-26 21:17:54.055981] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:15:04.903 21:17:54 -- common/autotest_common.sh@641 -- # es=1 00:15:04.903 21:17:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:04.903 ************************************ 00:15:04.903 END TEST skip_rpc_with_delay 00:15:04.903 ************************************ 00:15:04.903 21:17:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:04.903 21:17:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:04.903 00:15:04.903 real 0m0.078s 00:15:04.903 user 0m0.045s 00:15:04.903 sys 0m0.031s 00:15:04.903 21:17:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:04.903 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:15:04.903 21:17:54 -- rpc/skip_rpc.sh@77 -- # uname 00:15:04.903 21:17:54 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:15:04.903 21:17:54 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:15:04.903 21:17:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:04.903 21:17:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:04.903 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:15:05.162 ************************************ 00:15:05.162 START TEST exit_on_failed_rpc_init 00:15:05.162 ************************************ 00:15:05.162 21:17:54 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:15:05.162 21:17:54 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=73684 00:15:05.162 21:17:54 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:05.162 21:17:54 -- rpc/skip_rpc.sh@63 -- # waitforlisten 73684 00:15:05.162 21:17:54 -- common/autotest_common.sh@817 -- # '[' -z 73684 ']' 00:15:05.162 21:17:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.162 21:17:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:05.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.162 21:17:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.162 21:17:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:05.162 21:17:54 -- common/autotest_common.sh@10 -- # set +x 00:15:05.162 [2024-04-26 21:17:54.270268] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:15:05.162 [2024-04-26 21:17:54.270352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73684 ] 00:15:05.162 [2024-04-26 21:17:54.410346] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.421 [2024-04-26 21:17:54.461458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.987 21:17:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:05.987 21:17:55 -- common/autotest_common.sh@850 -- # return 0 00:15:05.987 21:17:55 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:05.987 21:17:55 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:05.987 21:17:55 -- common/autotest_common.sh@638 -- # local es=0 00:15:05.987 21:17:55 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:05.987 21:17:55 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.987 21:17:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:05.987 21:17:55 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.987 21:17:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:05.987 21:17:55 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.987 21:17:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:05.987 21:17:55 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.987 21:17:55 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:05.987 21:17:55 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:05.987 [2024-04-26 21:17:55.211542] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:05.987 [2024-04-26 21:17:55.211614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73714 ] 00:15:06.248 [2024-04-26 21:17:55.350986] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.248 [2024-04-26 21:17:55.400755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.248 [2024-04-26 21:17:55.400911] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
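That error is the point of exit_on_failed_rpc_init: the second spdk_tgt (mask 0x2, pid 73714) tries to bind the same default /var/tmp/spdk.sock that the first instance (pid 73684) already owns, so rpc.c refuses and the app stops with the non-zero exit the test expects. For comparison, two targets normally coexist by giving each its own -r socket, as the json_config test further down does with /var/tmp/spdk_tgt.sock; the socket paths below are illustrative only, and a real script would wait for each socket rather than sleep:

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $tgt -m 0x1 -r /var/tmp/spdk_a.sock &   # first instance with its own RPC socket
  $tgt -m 0x2 -r /var/tmp/spdk_b.sock &   # second instance, no socket collision
  sleep 1                                 # placeholder for a proper waitforlisten-style poll
  $rpc -s /var/tmp/spdk_a.sock spdk_get_version
  $rpc -s /var/tmp/spdk_b.sock spdk_get_version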
00:15:06.248 [2024-04-26 21:17:55.400957] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:15:06.248 [2024-04-26 21:17:55.400976] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:06.248 21:17:55 -- common/autotest_common.sh@641 -- # es=234 00:15:06.248 21:17:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:06.248 21:17:55 -- common/autotest_common.sh@650 -- # es=106 00:15:06.248 21:17:55 -- common/autotest_common.sh@651 -- # case "$es" in 00:15:06.248 21:17:55 -- common/autotest_common.sh@658 -- # es=1 00:15:06.248 21:17:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:06.248 21:17:55 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:06.248 21:17:55 -- rpc/skip_rpc.sh@70 -- # killprocess 73684 00:15:06.248 21:17:55 -- common/autotest_common.sh@936 -- # '[' -z 73684 ']' 00:15:06.248 21:17:55 -- common/autotest_common.sh@940 -- # kill -0 73684 00:15:06.248 21:17:55 -- common/autotest_common.sh@941 -- # uname 00:15:06.248 21:17:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:06.248 21:17:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73684 00:15:06.510 21:17:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:06.510 21:17:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:06.510 21:17:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73684' 00:15:06.510 killing process with pid 73684 00:15:06.510 21:17:55 -- common/autotest_common.sh@955 -- # kill 73684 00:15:06.510 21:17:55 -- common/autotest_common.sh@960 -- # wait 73684 00:15:06.768 00:15:06.768 real 0m1.628s 00:15:06.768 user 0m1.828s 00:15:06.768 sys 0m0.380s 00:15:06.768 21:17:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:06.768 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:15:06.768 ************************************ 00:15:06.768 END TEST exit_on_failed_rpc_init 00:15:06.768 ************************************ 00:15:06.768 21:17:55 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:06.768 00:15:06.768 real 0m14.079s 00:15:06.768 user 0m13.097s 00:15:06.768 sys 0m1.576s 00:15:06.768 21:17:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:06.768 ************************************ 00:15:06.768 END TEST skip_rpc 00:15:06.768 ************************************ 00:15:06.768 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:15:06.768 21:17:55 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:06.768 21:17:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:06.768 21:17:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:06.768 21:17:55 -- common/autotest_common.sh@10 -- # set +x 00:15:07.043 ************************************ 00:15:07.043 START TEST rpc_client 00:15:07.043 ************************************ 00:15:07.043 21:17:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:07.043 * Looking for test storage... 
00:15:07.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:15:07.043 21:17:56 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:15:07.043 OK 00:15:07.043 21:17:56 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:15:07.043 00:15:07.043 real 0m0.155s 00:15:07.043 user 0m0.073s 00:15:07.043 sys 0m0.091s 00:15:07.043 21:17:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:07.043 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:15:07.043 ************************************ 00:15:07.043 END TEST rpc_client 00:15:07.043 ************************************ 00:15:07.043 21:17:56 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:07.043 21:17:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:07.043 21:17:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.043 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:15:07.302 ************************************ 00:15:07.302 START TEST json_config 00:15:07.302 ************************************ 00:15:07.302 21:17:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:07.302 21:17:56 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:07.302 21:17:56 -- nvmf/common.sh@7 -- # uname -s 00:15:07.302 21:17:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.302 21:17:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.302 21:17:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.302 21:17:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.302 21:17:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.302 21:17:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.302 21:17:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.302 21:17:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.302 21:17:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.302 21:17:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.302 21:17:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:15:07.302 21:17:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:15:07.302 21:17:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.302 21:17:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.302 21:17:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:07.302 21:17:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.302 21:17:56 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:07.302 21:17:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.302 21:17:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.302 21:17:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.302 21:17:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.302 21:17:56 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.302 21:17:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.302 21:17:56 -- paths/export.sh@5 -- # export PATH 00:15:07.302 21:17:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.303 21:17:56 -- nvmf/common.sh@47 -- # : 0 00:15:07.303 21:17:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.303 21:17:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.303 21:17:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.303 21:17:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.303 21:17:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.303 21:17:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.303 21:17:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.303 21:17:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.303 21:17:56 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:07.303 21:17:56 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:15:07.303 21:17:56 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:15:07.303 21:17:56 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:15:07.303 21:17:56 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:15:07.303 21:17:56 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:15:07.303 21:17:56 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:15:07.303 21:17:56 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:15:07.303 21:17:56 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:15:07.303 21:17:56 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:15:07.303 21:17:56 -- json_config/json_config.sh@33 -- # declare -A app_params 00:15:07.303 21:17:56 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:15:07.303 21:17:56 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:15:07.303 21:17:56 -- json_config/json_config.sh@40 -- # last_event_id=0 00:15:07.303 
21:17:56 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:07.303 21:17:56 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:15:07.303 INFO: JSON configuration test init 00:15:07.303 21:17:56 -- json_config/json_config.sh@357 -- # json_config_test_init 00:15:07.303 21:17:56 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:15:07.303 21:17:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:07.303 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:15:07.303 21:17:56 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:15:07.303 21:17:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:07.303 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:15:07.303 21:17:56 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:15:07.303 21:17:56 -- json_config/common.sh@9 -- # local app=target 00:15:07.303 21:17:56 -- json_config/common.sh@10 -- # shift 00:15:07.303 21:17:56 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:07.303 21:17:56 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:07.303 21:17:56 -- json_config/common.sh@15 -- # local app_extra_params= 00:15:07.303 21:17:56 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:07.303 21:17:56 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:07.303 21:17:56 -- json_config/common.sh@22 -- # app_pid["$app"]=73843 00:15:07.303 21:17:56 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:07.303 Waiting for target to run... 00:15:07.303 21:17:56 -- json_config/common.sh@25 -- # waitforlisten 73843 /var/tmp/spdk_tgt.sock 00:15:07.303 21:17:56 -- common/autotest_common.sh@817 -- # '[' -z 73843 ']' 00:15:07.303 21:17:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:07.303 21:17:56 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:15:07.303 21:17:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:07.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:07.303 21:17:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:07.303 21:17:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:07.303 21:17:56 -- common/autotest_common.sh@10 -- # set +x 00:15:07.303 [2024-04-26 21:17:56.523033] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:15:07.303 [2024-04-26 21:17:56.523205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73843 ] 00:15:07.869 [2024-04-26 21:17:56.878871] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.870 [2024-04-26 21:17:56.912282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.435 21:17:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:08.435 21:17:57 -- common/autotest_common.sh@850 -- # return 0 00:15:08.435 21:17:57 -- json_config/common.sh@26 -- # echo '' 00:15:08.435 00:15:08.435 21:17:57 -- json_config/json_config.sh@269 -- # create_accel_config 00:15:08.435 21:17:57 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:15:08.435 21:17:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:08.435 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:15:08.435 21:17:57 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:15:08.435 21:17:57 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:15:08.435 21:17:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:08.435 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:15:08.435 21:17:57 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:15:08.435 21:17:57 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:15:08.435 21:17:57 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:15:08.694 21:17:57 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:15:08.694 21:17:57 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:15:08.694 21:17:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:08.694 21:17:57 -- common/autotest_common.sh@10 -- # set +x 00:15:08.694 21:17:57 -- json_config/json_config.sh@45 -- # local ret=0 00:15:08.694 21:17:57 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:15:08.694 21:17:57 -- json_config/json_config.sh@46 -- # local enabled_types 00:15:08.694 21:17:57 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:15:08.694 21:17:57 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:15:08.694 21:17:57 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:15:08.953 21:17:58 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:15:08.953 21:17:58 -- json_config/json_config.sh@48 -- # local get_types 00:15:08.953 21:17:58 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:15:08.953 21:17:58 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:15:08.953 21:17:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:08.953 21:17:58 -- common/autotest_common.sh@10 -- # set +x 00:15:09.212 21:17:58 -- json_config/json_config.sh@55 -- # return 0 00:15:09.212 21:17:58 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:15:09.212 21:17:58 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:15:09.212 21:17:58 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:15:09.212 21:17:58 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
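With the notification types verified, the test takes the create_nvmf_subsystem_config branch gated by the [[ 1 -eq 1 ]] check above. The tgt_rpc calls that follow reduce to roughly the rpc.py invocations sketched below, against the same /var/tmp/spdk_tgt.sock socket; bdev sizes, NQN, serial and 127.0.0.1:4420 are the values from this log:

  # small wrapper so every call targets the json_config test's RPC socket
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MB malloc bdev, 512-byte blocks
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MB malloc bdev, 1024-byte blocks
  rpc nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport; -u/-c set I/O unit and in-capsule sizes
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420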
00:15:09.212 21:17:58 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:15:09.212 21:17:58 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:15:09.213 21:17:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:09.213 21:17:58 -- common/autotest_common.sh@10 -- # set +x 00:15:09.213 21:17:58 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:15:09.213 21:17:58 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:15:09.213 21:17:58 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:15:09.213 21:17:58 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:15:09.213 21:17:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:15:09.213 MallocForNvmf0 00:15:09.213 21:17:58 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:15:09.213 21:17:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:15:09.472 MallocForNvmf1 00:15:09.472 21:17:58 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:15:09.472 21:17:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:15:09.741 [2024-04-26 21:17:58.854973] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.741 21:17:58 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:09.741 21:17:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:10.000 21:17:59 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:15:10.000 21:17:59 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:15:10.258 21:17:59 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:15:10.258 21:17:59 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:15:10.258 21:17:59 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:15:10.258 21:17:59 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:15:10.517 [2024-04-26 21:17:59.681848] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:15:10.517 21:17:59 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:15:10.517 21:17:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:10.517 21:17:59 -- common/autotest_common.sh@10 -- # set +x 00:15:10.517 21:17:59 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:15:10.517 21:17:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:10.517 21:17:59 -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.775 21:17:59 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:15:10.775 21:17:59 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:15:10.775 21:17:59 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:15:10.775 MallocBdevForConfigChangeCheck 00:15:10.775 21:18:00 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:15:10.775 21:18:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:10.775 21:18:00 -- common/autotest_common.sh@10 -- # set +x 00:15:11.034 21:18:00 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:15:11.034 21:18:00 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:11.294 21:18:00 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:15:11.294 INFO: shutting down applications... 00:15:11.294 21:18:00 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:15:11.294 21:18:00 -- json_config/json_config.sh@368 -- # json_config_clear target 00:15:11.294 21:18:00 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:15:11.294 21:18:00 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:15:11.553 Calling clear_iscsi_subsystem 00:15:11.554 Calling clear_nvmf_subsystem 00:15:11.554 Calling clear_nbd_subsystem 00:15:11.554 Calling clear_ublk_subsystem 00:15:11.554 Calling clear_vhost_blk_subsystem 00:15:11.554 Calling clear_vhost_scsi_subsystem 00:15:11.554 Calling clear_bdev_subsystem 00:15:11.554 21:18:00 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:15:11.554 21:18:00 -- json_config/json_config.sh@343 -- # count=100 00:15:11.554 21:18:00 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:15:11.554 21:18:00 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:11.554 21:18:00 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:15:11.554 21:18:00 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:15:12.123 21:18:01 -- json_config/json_config.sh@345 -- # break 00:15:12.123 21:18:01 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:15:12.123 21:18:01 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:15:12.123 21:18:01 -- json_config/common.sh@31 -- # local app=target 00:15:12.123 21:18:01 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:12.123 21:18:01 -- json_config/common.sh@35 -- # [[ -n 73843 ]] 00:15:12.123 21:18:01 -- json_config/common.sh@38 -- # kill -SIGINT 73843 00:15:12.123 21:18:01 -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:12.123 21:18:01 -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:12.123 21:18:01 -- json_config/common.sh@41 -- # kill -0 73843 00:15:12.123 21:18:01 -- json_config/common.sh@45 -- # sleep 0.5 00:15:12.692 21:18:01 -- json_config/common.sh@40 -- # (( i++ )) 00:15:12.692 21:18:01 -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:12.692 21:18:01 -- json_config/common.sh@41 -- # kill -0 73843 00:15:12.692 21:18:01 -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:15:12.692 21:18:01 -- json_config/common.sh@43 -- # break 00:15:12.692 21:18:01 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:12.692 SPDK target shutdown done 00:15:12.692 21:18:01 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:12.692 INFO: relaunching applications... 00:15:12.692 21:18:01 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:15:12.692 21:18:01 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:12.692 21:18:01 -- json_config/common.sh@9 -- # local app=target 00:15:12.692 21:18:01 -- json_config/common.sh@10 -- # shift 00:15:12.692 21:18:01 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:12.692 21:18:01 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:12.692 21:18:01 -- json_config/common.sh@15 -- # local app_extra_params= 00:15:12.692 21:18:01 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:12.692 21:18:01 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:12.692 21:18:01 -- json_config/common.sh@22 -- # app_pid["$app"]=74111 00:15:12.692 Waiting for target to run... 00:15:12.692 21:18:01 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:12.692 21:18:01 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:12.692 21:18:01 -- json_config/common.sh@25 -- # waitforlisten 74111 /var/tmp/spdk_tgt.sock 00:15:12.692 21:18:01 -- common/autotest_common.sh@817 -- # '[' -z 74111 ']' 00:15:12.692 21:18:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:12.692 21:18:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:12.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:12.692 21:18:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:12.692 21:18:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:12.692 21:18:01 -- common/autotest_common.sh@10 -- # set +x 00:15:12.692 [2024-04-26 21:18:01.741860] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:12.692 [2024-04-26 21:18:01.741932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74111 ] 00:15:12.951 [2024-04-26 21:18:02.103530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.951 [2024-04-26 21:18:02.146172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.210 [2024-04-26 21:18:02.443497] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.470 [2024-04-26 21:18:02.475572] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:15:13.470 21:18:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:13.470 21:18:02 -- common/autotest_common.sh@850 -- # return 0 00:15:13.470 00:15:13.470 21:18:02 -- json_config/common.sh@26 -- # echo '' 00:15:13.470 21:18:02 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:15:13.470 INFO: Checking if target configuration is the same... 
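Note: the comparison traced next is json_diff.sh normalizing both sides with config_filter.py -method sort and diffing the results. A simplified sketch that uses temporary files instead of the /dev/fd/62 redirection seen below:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    live=$(mktemp)
    saved=$(mktemp)
    # live config from the relaunched target vs. the JSON file it was started with
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > "$live"
    "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$saved"
    diff -u "$live" "$saved" && echo 'INFO: JSON config files are the same'
    rm -f "$live" "$saved"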
00:15:13.470 21:18:02 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:15:13.470 21:18:02 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:15:13.470 21:18:02 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:13.470 21:18:02 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:13.470 + '[' 2 -ne 2 ']' 00:15:13.470 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:13.470 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:15:13.470 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:13.470 +++ basename /dev/fd/62 00:15:13.470 ++ mktemp /tmp/62.XXX 00:15:13.470 + tmp_file_1=/tmp/62.Z42 00:15:13.470 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:13.470 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:13.470 + tmp_file_2=/tmp/spdk_tgt_config.json.pck 00:15:13.470 + ret=0 00:15:13.470 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:14.038 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:14.038 + diff -u /tmp/62.Z42 /tmp/spdk_tgt_config.json.pck 00:15:14.038 INFO: JSON config files are the same 00:15:14.038 + echo 'INFO: JSON config files are the same' 00:15:14.038 + rm /tmp/62.Z42 /tmp/spdk_tgt_config.json.pck 00:15:14.038 + exit 0 00:15:14.038 21:18:03 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:15:14.038 INFO: changing configuration and checking if this can be detected... 00:15:14.038 21:18:03 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:15:14.038 21:18:03 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:14.038 21:18:03 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:14.306 21:18:03 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:14.306 21:18:03 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:15:14.306 21:18:03 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:14.306 + '[' 2 -ne 2 ']' 00:15:14.306 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:14.306 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:15:14.306 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:14.306 +++ basename /dev/fd/62 00:15:14.306 ++ mktemp /tmp/62.XXX 00:15:14.306 + tmp_file_1=/tmp/62.9IN 00:15:14.306 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:14.306 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:14.306 + tmp_file_2=/tmp/spdk_tgt_config.json.rvt 00:15:14.306 + ret=0 00:15:14.306 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:14.603 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:14.603 + diff -u /tmp/62.9IN /tmp/spdk_tgt_config.json.rvt 00:15:14.603 + ret=1 00:15:14.603 + echo '=== Start of file: /tmp/62.9IN ===' 00:15:14.603 + cat /tmp/62.9IN 00:15:14.603 + echo '=== End of file: /tmp/62.9IN ===' 00:15:14.603 + echo '' 00:15:14.603 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rvt ===' 00:15:14.603 + cat /tmp/spdk_tgt_config.json.rvt 00:15:14.603 + echo '=== End of file: /tmp/spdk_tgt_config.json.rvt ===' 00:15:14.603 + echo '' 00:15:14.603 + rm /tmp/62.9IN /tmp/spdk_tgt_config.json.rvt 00:15:14.603 + exit 1 00:15:14.603 INFO: configuration change detected. 00:15:14.603 21:18:03 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:15:14.603 21:18:03 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:15:14.604 21:18:03 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:15:14.604 21:18:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:14.604 21:18:03 -- common/autotest_common.sh@10 -- # set +x 00:15:14.604 21:18:03 -- json_config/json_config.sh@307 -- # local ret=0 00:15:14.604 21:18:03 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:15:14.604 21:18:03 -- json_config/json_config.sh@317 -- # [[ -n 74111 ]] 00:15:14.604 21:18:03 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:15:14.604 21:18:03 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:15:14.604 21:18:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:14.604 21:18:03 -- common/autotest_common.sh@10 -- # set +x 00:15:14.604 21:18:03 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:15:14.604 21:18:03 -- json_config/json_config.sh@193 -- # uname -s 00:15:14.604 21:18:03 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:15:14.604 21:18:03 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:15:14.604 21:18:03 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:15:14.604 21:18:03 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:15:14.604 21:18:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:14.604 21:18:03 -- common/autotest_common.sh@10 -- # set +x 00:15:14.869 21:18:03 -- json_config/json_config.sh@323 -- # killprocess 74111 00:15:14.869 21:18:03 -- common/autotest_common.sh@936 -- # '[' -z 74111 ']' 00:15:14.869 21:18:03 -- common/autotest_common.sh@940 -- # kill -0 74111 00:15:14.869 21:18:03 -- common/autotest_common.sh@941 -- # uname 00:15:14.869 21:18:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:14.869 21:18:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74111 00:15:14.869 21:18:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:14.869 21:18:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:14.869 killing process with pid 74111 00:15:14.869 21:18:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74111' 00:15:14.869 
21:18:03 -- common/autotest_common.sh@955 -- # kill 74111 00:15:14.869 21:18:03 -- common/autotest_common.sh@960 -- # wait 74111 00:15:15.128 21:18:04 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:15.128 21:18:04 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:15:15.128 21:18:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:15.128 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:15.128 21:18:04 -- json_config/json_config.sh@328 -- # return 0 00:15:15.128 INFO: Success 00:15:15.128 21:18:04 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:15:15.128 ************************************ 00:15:15.128 END TEST json_config 00:15:15.128 ************************************ 00:15:15.128 00:15:15.128 real 0m7.963s 00:15:15.128 user 0m11.211s 00:15:15.128 sys 0m1.902s 00:15:15.128 21:18:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:15.128 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:15.128 21:18:04 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:15.128 21:18:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:15.128 21:18:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.128 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:15.387 ************************************ 00:15:15.387 START TEST json_config_extra_key 00:15:15.387 ************************************ 00:15:15.387 21:18:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:15.387 21:18:04 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:15.387 21:18:04 -- nvmf/common.sh@7 -- # uname -s 00:15:15.387 21:18:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.387 21:18:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.387 21:18:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.387 21:18:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.387 21:18:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.387 21:18:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.387 21:18:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.387 21:18:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.387 21:18:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.387 21:18:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.387 21:18:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:15:15.387 21:18:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:15:15.387 21:18:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.387 21:18:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.387 21:18:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:15.387 21:18:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.387 21:18:04 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:15.387 21:18:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.387 21:18:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.387 21:18:04 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.387 21:18:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.387 21:18:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.387 21:18:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.387 21:18:04 -- paths/export.sh@5 -- # export PATH 00:15:15.387 21:18:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.387 21:18:04 -- nvmf/common.sh@47 -- # : 0 00:15:15.387 21:18:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:15.387 21:18:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:15.387 21:18:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.387 21:18:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.387 21:18:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.387 21:18:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:15.387 21:18:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:15.387 21:18:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:15.387 21:18:04 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:15.387 21:18:04 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:15:15.387 21:18:04 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:15:15.387 21:18:04 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:15:15.387 21:18:04 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:15:15.387 21:18:04 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:15:15.387 21:18:04 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:15:15.387 21:18:04 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:15:15.387 21:18:04 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:15:15.387 21:18:04 -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:15.387 INFO: launching applications... 00:15:15.387 21:18:04 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:15:15.387 21:18:04 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:15.387 21:18:04 -- json_config/common.sh@9 -- # local app=target 00:15:15.387 21:18:04 -- json_config/common.sh@10 -- # shift 00:15:15.387 21:18:04 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:15.387 21:18:04 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:15.387 21:18:04 -- json_config/common.sh@15 -- # local app_extra_params= 00:15:15.387 21:18:04 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:15.387 21:18:04 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:15.387 21:18:04 -- json_config/common.sh@22 -- # app_pid["$app"]=74292 00:15:15.387 21:18:04 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:15.387 Waiting for target to run... 00:15:15.387 21:18:04 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:15.387 21:18:04 -- json_config/common.sh@25 -- # waitforlisten 74292 /var/tmp/spdk_tgt.sock 00:15:15.387 21:18:04 -- common/autotest_common.sh@817 -- # '[' -z 74292 ']' 00:15:15.387 21:18:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:15.387 21:18:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:15.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:15.387 21:18:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:15.387 21:18:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:15.387 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:15:15.387 [2024-04-26 21:18:04.624784] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:15.387 [2024-04-26 21:18:04.624893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74292 ] 00:15:15.953 [2024-04-26 21:18:04.996686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.953 [2024-04-26 21:18:05.038058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.518 21:18:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:16.518 21:18:05 -- common/autotest_common.sh@850 -- # return 0 00:15:16.518 00:15:16.518 21:18:05 -- json_config/common.sh@26 -- # echo '' 00:15:16.518 INFO: shutting down applications... 00:15:16.518 21:18:05 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
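Note: json_config_test_shutdown_app, traced below, is just SIGINT plus a bounded poll — up to 30 checks, 0.5 s apart, using kill -0 — before the 'SPDK target shutdown done' message. A minimal sketch of that loop (app_pid stands for the pid recorded at launch, 74292 here):

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break   # process is gone once kill -0 fails
        sleep 0.5
    done
    echo 'SPDK target shutdown done'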
00:15:16.518 21:18:05 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:15:16.518 21:18:05 -- json_config/common.sh@31 -- # local app=target 00:15:16.518 21:18:05 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:16.518 21:18:05 -- json_config/common.sh@35 -- # [[ -n 74292 ]] 00:15:16.518 21:18:05 -- json_config/common.sh@38 -- # kill -SIGINT 74292 00:15:16.518 21:18:05 -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:16.518 21:18:05 -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:16.518 21:18:05 -- json_config/common.sh@41 -- # kill -0 74292 00:15:16.518 21:18:05 -- json_config/common.sh@45 -- # sleep 0.5 00:15:16.776 21:18:06 -- json_config/common.sh@40 -- # (( i++ )) 00:15:16.776 21:18:06 -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:16.776 21:18:06 -- json_config/common.sh@41 -- # kill -0 74292 00:15:16.776 21:18:06 -- json_config/common.sh@45 -- # sleep 0.5 00:15:17.341 21:18:06 -- json_config/common.sh@40 -- # (( i++ )) 00:15:17.341 21:18:06 -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:17.341 21:18:06 -- json_config/common.sh@41 -- # kill -0 74292 00:15:17.341 21:18:06 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:17.341 21:18:06 -- json_config/common.sh@43 -- # break 00:15:17.341 21:18:06 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:17.341 21:18:06 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:17.341 SPDK target shutdown done 00:15:17.341 Success 00:15:17.341 21:18:06 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:15:17.341 00:15:17.341 real 0m2.086s 00:15:17.341 user 0m1.606s 00:15:17.341 sys 0m0.418s 00:15:17.341 21:18:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:17.341 21:18:06 -- common/autotest_common.sh@10 -- # set +x 00:15:17.341 ************************************ 00:15:17.341 END TEST json_config_extra_key 00:15:17.341 ************************************ 00:15:17.341 21:18:06 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:17.341 21:18:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:17.341 21:18:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:17.341 21:18:06 -- common/autotest_common.sh@10 -- # set +x 00:15:17.599 ************************************ 00:15:17.599 START TEST alias_rpc 00:15:17.599 ************************************ 00:15:17.599 21:18:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:17.599 * Looking for test storage... 00:15:17.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:15:17.599 21:18:06 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:17.599 21:18:06 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=74380 00:15:17.599 21:18:06 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 74380 00:15:17.599 21:18:06 -- common/autotest_common.sh@817 -- # '[' -z 74380 ']' 00:15:17.599 21:18:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.599 21:18:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:17.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.599 21:18:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:17.599 21:18:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:17.599 21:18:06 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:17.599 21:18:06 -- common/autotest_common.sh@10 -- # set +x 00:15:17.599 [2024-04-26 21:18:06.792197] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:17.599 [2024-04-26 21:18:06.792290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74380 ] 00:15:17.858 [2024-04-26 21:18:06.931079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.858 [2024-04-26 21:18:07.011460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.790 21:18:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:18.790 21:18:07 -- common/autotest_common.sh@850 -- # return 0 00:15:18.790 21:18:07 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:15:18.790 21:18:07 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 74380 00:15:18.790 21:18:07 -- common/autotest_common.sh@936 -- # '[' -z 74380 ']' 00:15:18.790 21:18:07 -- common/autotest_common.sh@940 -- # kill -0 74380 00:15:18.790 21:18:07 -- common/autotest_common.sh@941 -- # uname 00:15:18.790 21:18:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:18.790 21:18:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74380 00:15:18.790 21:18:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:18.790 21:18:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:18.790 killing process with pid 74380 00:15:18.790 21:18:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74380' 00:15:18.790 21:18:07 -- common/autotest_common.sh@955 -- # kill 74380 00:15:18.790 21:18:07 -- common/autotest_common.sh@960 -- # wait 74380 00:15:19.368 00:15:19.368 real 0m1.918s 00:15:19.368 user 0m1.997s 00:15:19.368 sys 0m0.544s 00:15:19.368 21:18:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:19.368 21:18:08 -- common/autotest_common.sh@10 -- # set +x 00:15:19.368 ************************************ 00:15:19.368 END TEST alias_rpc 00:15:19.368 ************************************ 00:15:19.368 21:18:08 -- spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:15:19.368 21:18:08 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:19.368 21:18:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:19.368 21:18:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:19.368 21:18:08 -- common/autotest_common.sh@10 -- # set +x 00:15:19.627 ************************************ 00:15:19.627 START TEST dpdk_mem_utility 00:15:19.627 ************************************ 00:15:19.627 21:18:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:19.627 * Looking for test storage... 00:15:19.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:15:19.627 21:18:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:19.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
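Note: the dpdk_mem_utility test that follows exercises two pieces: the env_dpdk_get_mem_stats RPC, which has the target write its DPDK memory statistics to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which formats that dump. Roughly, against the default /var/tmp/spdk.sock socket used here:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    mem_script=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    "$rpc" env_dpdk_get_mem_stats   # dump written to /tmp/spdk_mem_dump.txt
    "$mem_script"                   # heaps / mempools / memzones summary
    "$mem_script" -m 0              # per-element detail for heap id 0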
00:15:19.627 21:18:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=74478 00:15:19.627 21:18:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 74478 00:15:19.627 21:18:08 -- common/autotest_common.sh@817 -- # '[' -z 74478 ']' 00:15:19.627 21:18:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.627 21:18:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:19.627 21:18:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.627 21:18:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:19.627 21:18:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:19.627 21:18:08 -- common/autotest_common.sh@10 -- # set +x 00:15:19.627 [2024-04-26 21:18:08.848308] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:19.627 [2024-04-26 21:18:08.848402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74478 ] 00:15:19.885 [2024-04-26 21:18:08.990361] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.885 [2024-04-26 21:18:09.074663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.822 21:18:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:20.822 21:18:09 -- common/autotest_common.sh@850 -- # return 0 00:15:20.822 21:18:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:15:20.822 21:18:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:15:20.822 21:18:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:20.822 21:18:09 -- common/autotest_common.sh@10 -- # set +x 00:15:20.822 { 00:15:20.822 "filename": "/tmp/spdk_mem_dump.txt" 00:15:20.822 } 00:15:20.822 21:18:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:20.822 21:18:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:20.822 DPDK memory size 814.000000 MiB in 1 heap(s) 00:15:20.822 1 heaps totaling size 814.000000 MiB 00:15:20.822 size: 814.000000 MiB heap id: 0 00:15:20.822 end heaps---------- 00:15:20.822 8 mempools totaling size 598.116089 MiB 00:15:20.822 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:15:20.822 size: 158.602051 MiB name: PDU_data_out_Pool 00:15:20.822 size: 84.521057 MiB name: bdev_io_74478 00:15:20.822 size: 51.011292 MiB name: evtpool_74478 00:15:20.822 size: 50.003479 MiB name: msgpool_74478 00:15:20.822 size: 21.763794 MiB name: PDU_Pool 00:15:20.822 size: 19.513306 MiB name: SCSI_TASK_Pool 00:15:20.822 size: 0.026123 MiB name: Session_Pool 00:15:20.822 end mempools------- 00:15:20.822 6 memzones totaling size 4.142822 MiB 00:15:20.822 size: 1.000366 MiB name: RG_ring_0_74478 00:15:20.822 size: 1.000366 MiB name: RG_ring_1_74478 00:15:20.822 size: 1.000366 MiB name: RG_ring_4_74478 00:15:20.822 size: 1.000366 MiB name: RG_ring_5_74478 00:15:20.822 size: 0.125366 MiB name: RG_ring_2_74478 00:15:20.822 size: 0.015991 MiB name: RG_ring_3_74478 00:15:20.822 end memzones------- 00:15:20.822 21:18:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:15:20.822 heap id: 0 total size: 
814.000000 MiB number of busy elements: 233 number of free elements: 15 00:15:20.822 list of free elements. size: 12.484192 MiB 00:15:20.822 element at address: 0x200000400000 with size: 1.999512 MiB 00:15:20.822 element at address: 0x200018e00000 with size: 0.999878 MiB 00:15:20.822 element at address: 0x200019000000 with size: 0.999878 MiB 00:15:20.822 element at address: 0x200003e00000 with size: 0.996277 MiB 00:15:20.822 element at address: 0x200031c00000 with size: 0.994446 MiB 00:15:20.822 element at address: 0x200013800000 with size: 0.978699 MiB 00:15:20.822 element at address: 0x200007000000 with size: 0.959839 MiB 00:15:20.822 element at address: 0x200019200000 with size: 0.936584 MiB 00:15:20.822 element at address: 0x200000200000 with size: 0.836853 MiB 00:15:20.822 element at address: 0x20001aa00000 with size: 0.570251 MiB 00:15:20.822 element at address: 0x20000b200000 with size: 0.489258 MiB 00:15:20.822 element at address: 0x200000800000 with size: 0.486877 MiB 00:15:20.822 element at address: 0x200019400000 with size: 0.485657 MiB 00:15:20.822 element at address: 0x200027e00000 with size: 0.398682 MiB 00:15:20.822 element at address: 0x200003a00000 with size: 0.351501 MiB 00:15:20.822 list of standard malloc elements. size: 199.253235 MiB 00:15:20.822 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:15:20.822 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:15:20.822 element at address: 0x200018efff80 with size: 1.000122 MiB 00:15:20.822 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:15:20.822 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:15:20.822 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:15:20.822 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:15:20.822 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:15:20.822 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:15:20.822 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:15:20.822 element at address: 
0x2000002d7580 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:15:20.822 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:15:20.822 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:15:20.822 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:15:20.822 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:15:20.822 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:15:20.822 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:15:20.822 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200003adb300 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200003adb500 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200003affa80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200003affb40 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20000b27d400 with size: 
0.000183 MiB 00:15:20.823 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:15:20.823 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:15:20.823 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:15:20.823 
element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:15:20.823 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e66100 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e661c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6cdc0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:15:20.823 element at address: 
0x200027e6d740 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:15:20.823 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:15:20.824 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:15:20.824 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:15:20.824 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:15:20.824 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:15:20.824 element at address: 0x200027e6fc00 with size: 
0.000183 MiB 00:15:20.824 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:15:20.824 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:15:20.824 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:15:20.824 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:15:20.824 list of memzone associated elements. size: 602.262573 MiB 00:15:20.824 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:15:20.824 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:15:20.824 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:15:20.824 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:15:20.824 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:15:20.824 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_74478_0 00:15:20.824 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:15:20.824 associated memzone info: size: 48.002930 MiB name: MP_evtpool_74478_0 00:15:20.824 element at address: 0x200003fff380 with size: 48.003052 MiB 00:15:20.824 associated memzone info: size: 48.002930 MiB name: MP_msgpool_74478_0 00:15:20.824 element at address: 0x2000195be940 with size: 20.255554 MiB 00:15:20.824 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:15:20.824 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:15:20.824 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:15:20.824 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:15:20.824 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_74478 00:15:20.824 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:15:20.824 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_74478 00:15:20.824 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:15:20.824 associated memzone info: size: 1.007996 MiB name: MP_evtpool_74478 00:15:20.824 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:15:20.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:15:20.824 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:15:20.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:15:20.824 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:15:20.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:15:20.824 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:15:20.824 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:15:20.824 element at address: 0x200003eff180 with size: 1.000488 MiB 00:15:20.824 associated memzone info: size: 1.000366 MiB name: RG_ring_0_74478 00:15:20.824 element at address: 0x200003affc00 with size: 1.000488 MiB 00:15:20.824 associated memzone info: size: 1.000366 MiB name: RG_ring_1_74478 00:15:20.824 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:15:20.824 associated memzone info: size: 1.000366 MiB name: RG_ring_4_74478 00:15:20.824 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:15:20.824 associated memzone info: size: 1.000366 MiB name: RG_ring_5_74478 00:15:20.824 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:15:20.824 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_74478 00:15:20.824 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:15:20.824 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:15:20.824 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:15:20.824 
associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:15:20.824 element at address: 0x20001947c540 with size: 0.250488 MiB 00:15:20.824 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:15:20.824 element at address: 0x200003adf880 with size: 0.125488 MiB 00:15:20.824 associated memzone info: size: 0.125366 MiB name: RG_ring_2_74478 00:15:20.824 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:15:20.824 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:15:20.824 element at address: 0x200027e66280 with size: 0.023743 MiB 00:15:20.824 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:15:20.824 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:15:20.824 associated memzone info: size: 0.015991 MiB name: RG_ring_3_74478 00:15:20.824 element at address: 0x200027e6c3c0 with size: 0.002441 MiB 00:15:20.824 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:15:20.824 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:15:20.824 associated memzone info: size: 0.000183 MiB name: MP_msgpool_74478 00:15:20.824 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:15:20.824 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_74478 00:15:20.824 element at address: 0x200027e6ce80 with size: 0.000305 MiB 00:15:20.824 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:15:20.824 21:18:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:15:20.824 21:18:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 74478 00:15:20.824 21:18:09 -- common/autotest_common.sh@936 -- # '[' -z 74478 ']' 00:15:20.824 21:18:09 -- common/autotest_common.sh@940 -- # kill -0 74478 00:15:20.824 21:18:09 -- common/autotest_common.sh@941 -- # uname 00:15:20.824 21:18:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:20.824 21:18:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74478 00:15:20.824 21:18:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:20.824 killing process with pid 74478 00:15:20.824 21:18:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:20.824 21:18:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74478' 00:15:20.824 21:18:09 -- common/autotest_common.sh@955 -- # kill 74478 00:15:20.824 21:18:09 -- common/autotest_common.sh@960 -- # wait 74478 00:15:21.390 ************************************ 00:15:21.390 END TEST dpdk_mem_utility 00:15:21.390 ************************************ 00:15:21.390 00:15:21.390 real 0m1.819s 00:15:21.390 user 0m1.802s 00:15:21.390 sys 0m0.540s 00:15:21.390 21:18:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:21.390 21:18:10 -- common/autotest_common.sh@10 -- # set +x 00:15:21.390 21:18:10 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:21.390 21:18:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:21.390 21:18:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:21.390 21:18:10 -- common/autotest_common.sh@10 -- # set +x 00:15:21.390 ************************************ 00:15:21.390 START TEST event 00:15:21.390 ************************************ 00:15:21.390 21:18:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:21.649 * Looking for test storage... 
00:15:21.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:21.649 21:18:10 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:21.649 21:18:10 -- bdev/nbd_common.sh@6 -- # set -e 00:15:21.649 21:18:10 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:21.649 21:18:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:15:21.649 21:18:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:21.649 21:18:10 -- common/autotest_common.sh@10 -- # set +x 00:15:21.649 ************************************ 00:15:21.649 START TEST event_perf 00:15:21.649 ************************************ 00:15:21.649 21:18:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:21.649 Running I/O for 1 seconds...[2024-04-26 21:18:10.826882] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:21.649 [2024-04-26 21:18:10.826984] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74583 ] 00:15:21.906 [2024-04-26 21:18:10.971573] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:21.906 [2024-04-26 21:18:11.059105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.906 [2024-04-26 21:18:11.059307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.906 [2024-04-26 21:18:11.059411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:21.906 [2024-04-26 21:18:11.059615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.278 Running I/O for 1 seconds... 00:15:23.278 lcore 0: 64150 00:15:23.278 lcore 1: 64153 00:15:23.278 lcore 2: 64157 00:15:23.278 lcore 3: 64160 00:15:23.278 done. 00:15:23.278 00:15:23.278 real 0m1.366s 00:15:23.278 user 0m4.163s 00:15:23.278 sys 0m0.076s 00:15:23.278 21:18:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:23.278 21:18:12 -- common/autotest_common.sh@10 -- # set +x 00:15:23.278 ************************************ 00:15:23.278 END TEST event_perf 00:15:23.278 ************************************ 00:15:23.278 21:18:12 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:23.278 21:18:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:23.278 21:18:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:23.278 21:18:12 -- common/autotest_common.sh@10 -- # set +x 00:15:23.278 ************************************ 00:15:23.278 START TEST event_reactor 00:15:23.278 ************************************ 00:15:23.278 21:18:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:23.278 [2024-04-26 21:18:12.348976] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:15:23.278 [2024-04-26 21:18:12.349087] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74621 ] 00:15:23.278 [2024-04-26 21:18:12.490172] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.535 [2024-04-26 21:18:12.571919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.464 test_start 00:15:24.464 oneshot 00:15:24.464 tick 100 00:15:24.464 tick 100 00:15:24.464 tick 250 00:15:24.464 tick 100 00:15:24.464 tick 100 00:15:24.464 tick 100 00:15:24.464 tick 250 00:15:24.464 tick 500 00:15:24.464 tick 100 00:15:24.464 tick 100 00:15:24.464 tick 250 00:15:24.464 tick 100 00:15:24.464 tick 100 00:15:24.464 test_end 00:15:24.464 00:15:24.464 real 0m1.355s 00:15:24.464 user 0m1.175s 00:15:24.464 sys 0m0.071s 00:15:24.464 21:18:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:24.464 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.464 ************************************ 00:15:24.464 END TEST event_reactor 00:15:24.464 ************************************ 00:15:24.721 21:18:13 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:24.721 21:18:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:24.721 21:18:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:24.721 21:18:13 -- common/autotest_common.sh@10 -- # set +x 00:15:24.721 ************************************ 00:15:24.721 START TEST event_reactor_perf 00:15:24.721 ************************************ 00:15:24.721 21:18:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:24.721 [2024-04-26 21:18:13.838231] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:15:24.721 [2024-04-26 21:18:13.838350] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74666 ] 00:15:24.978 [2024-04-26 21:18:13.979123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.978 [2024-04-26 21:18:14.059801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.349 test_start 00:15:26.349 test_end 00:15:26.349 Performance: 394915 events per second 00:15:26.349 00:15:26.349 real 0m1.360s 00:15:26.349 user 0m1.186s 00:15:26.349 sys 0m0.066s 00:15:26.349 21:18:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:26.349 21:18:15 -- common/autotest_common.sh@10 -- # set +x 00:15:26.349 ************************************ 00:15:26.349 END TEST event_reactor_perf 00:15:26.349 ************************************ 00:15:26.349 21:18:15 -- event/event.sh@49 -- # uname -s 00:15:26.349 21:18:15 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:15:26.349 21:18:15 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:26.349 21:18:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:26.349 21:18:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:26.349 21:18:15 -- common/autotest_common.sh@10 -- # set +x 00:15:26.349 ************************************ 00:15:26.349 START TEST event_scheduler 00:15:26.349 ************************************ 00:15:26.349 21:18:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:26.349 * Looking for test storage... 00:15:26.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:15:26.349 21:18:15 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:15:26.349 21:18:15 -- scheduler/scheduler.sh@35 -- # scheduler_pid=74727 00:15:26.349 21:18:15 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:15:26.349 21:18:15 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:15:26.349 21:18:15 -- scheduler/scheduler.sh@37 -- # waitforlisten 74727 00:15:26.349 21:18:15 -- common/autotest_common.sh@817 -- # '[' -z 74727 ']' 00:15:26.349 21:18:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.349 21:18:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:26.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.349 21:18:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.349 21:18:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:26.349 21:18:15 -- common/autotest_common.sh@10 -- # set +x 00:15:26.349 [2024-04-26 21:18:15.502505] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:15:26.349 [2024-04-26 21:18:15.502587] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74727 ] 00:15:26.606 [2024-04-26 21:18:15.647749] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:26.606 [2024-04-26 21:18:15.703883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.606 [2024-04-26 21:18:15.704074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.606 [2024-04-26 21:18:15.704198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.606 [2024-04-26 21:18:15.704200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.173 21:18:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:27.173 21:18:16 -- common/autotest_common.sh@850 -- # return 0 00:15:27.173 21:18:16 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:15:27.173 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.173 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.173 POWER: Env isn't set yet! 00:15:27.173 POWER: Attempting to initialise ACPI cpufreq power management... 00:15:27.173 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:27.173 POWER: Cannot set governor of lcore 0 to userspace 00:15:27.173 POWER: Attempting to initialise PSTAT power management... 00:15:27.173 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:27.173 POWER: Cannot set governor of lcore 0 to performance 00:15:27.173 POWER: Attempting to initialise AMD PSTATE power management... 00:15:27.173 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:27.173 POWER: Cannot set governor of lcore 0 to userspace 00:15:27.173 POWER: Attempting to initialise CPPC power management... 00:15:27.173 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:27.173 POWER: Cannot set governor of lcore 0 to userspace 00:15:27.173 POWER: Attempting to initialise VM power management... 00:15:27.173 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:15:27.173 POWER: Unable to set Power Management Environment for lcore 0 00:15:27.173 [2024-04-26 21:18:16.416411] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:15:27.173 [2024-04-26 21:18:16.416429] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:15:27.173 [2024-04-26 21:18:16.416435] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:15:27.173 21:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.173 21:18:16 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:15:27.173 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.173 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.431 [2024-04-26 21:18:16.487522] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
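The POWER errors traced above come from each cpufreq backend probe (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC, VM channel) failing to open /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor on this VM, so the dynamic scheduler continues without a DPDK governor. A minimal diagnostic sketch, not part of the test suite, for checking what governors a host actually exposes; only the sysfs paths named in the error messages are assumed:

  #!/usr/bin/env bash
  # Print the current and available cpufreq governor for every CPU, or say none is exposed.
  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      gov="$cpu/cpufreq/scaling_governor"
      if [ -r "$gov" ]; then
          printf '%s: %s (available: %s)\n' "${cpu##*/}" "$(cat "$gov")" \
              "$(cat "$cpu/cpufreq/scaling_available_governors" 2>/dev/null || echo unknown)"
      else
          printf '%s: no cpufreq governor exposed\n' "${cpu##*/}"
      fi
  done

On a guest like the one above, every CPU typically falls into the second branch, which matches the "Cannot set governor of lcore 0" messages in the trace.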
00:15:27.431 21:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:15:27.431 21:18:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:27.431 21:18:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:27.431 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.431 ************************************ 00:15:27.431 START TEST scheduler_create_thread 00:15:27.431 ************************************ 00:15:27.431 21:18:16 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:15:27.431 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.431 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.431 2 00:15:27.431 21:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:15:27.431 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.431 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.431 3 00:15:27.431 21:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:15:27.431 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.431 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.431 4 00:15:27.431 21:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:15:27.431 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.431 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.431 5 00:15:27.431 21:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:15:27.431 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.431 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.431 6 00:15:27.431 21:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:15:27.431 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.431 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.431 7 00:15:27.431 21:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:15:27.431 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.431 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.431 8 00:15:27.431 21:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:15:27.431 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.431 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.431 9 00:15:27.431 
21:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:15:27.431 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.431 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.431 10 00:15:27.431 21:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:15:27.431 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.431 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.431 21:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:15:27.431 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.431 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:27.431 21:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.431 21:18:16 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:15:27.431 21:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.431 21:18:16 -- common/autotest_common.sh@10 -- # set +x 00:15:29.332 21:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:29.332 21:18:18 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:15:29.332 21:18:18 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:15:29.332 21:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:29.332 21:18:18 -- common/autotest_common.sh@10 -- # set +x 00:15:30.267 21:18:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:30.267 00:15:30.267 real 0m2.609s 00:15:30.267 user 0m0.028s 00:15:30.267 sys 0m0.008s 00:15:30.267 21:18:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:30.267 21:18:19 -- common/autotest_common.sh@10 -- # set +x 00:15:30.267 ************************************ 00:15:30.267 END TEST scheduler_create_thread 00:15:30.267 ************************************ 00:15:30.267 21:18:19 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:30.267 21:18:19 -- scheduler/scheduler.sh@46 -- # killprocess 74727 00:15:30.267 21:18:19 -- common/autotest_common.sh@936 -- # '[' -z 74727 ']' 00:15:30.267 21:18:19 -- common/autotest_common.sh@940 -- # kill -0 74727 00:15:30.267 21:18:19 -- common/autotest_common.sh@941 -- # uname 00:15:30.267 21:18:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:30.267 21:18:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74727 00:15:30.267 21:18:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:30.267 21:18:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:30.267 killing process with pid 74727 00:15:30.267 21:18:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74727' 00:15:30.267 21:18:19 -- common/autotest_common.sh@955 -- # kill 74727 00:15:30.267 21:18:19 -- common/autotest_common.sh@960 -- # wait 74727 00:15:30.526 [2024-04-26 21:18:19.635051] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
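The scheduler_create_thread steps traced above reduce to a short JSON-RPC sequence. A minimal sketch, assuming the scheduler test app was started with --wait-for-rpc and is listening on the default /var/tmp/spdk.sock, and that the test's scheduler_plugin module is importable by rpc.py (e.g. via PYTHONPATH; that part is an assumption, not shown in the trace). The method names and arguments are the ones visible above:

  #!/usr/bin/env bash
  # Wrapper around rpc.py with the socket and plugin the scheduler test uses.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin "$@"; }

  rpc framework_set_scheduler dynamic      # may warn about the missing DPDK governor, as above
  rpc framework_start_init

  # Create a thread at 0% activity, then raise it to 50% (mirrors half_active / thread 11 above).
  tid=$(rpc scheduler_thread_create -n half_active -a 0)
  rpc scheduler_thread_set_active "$tid" 50

  # Create a fully active thread and delete it again (mirrors deleted / thread 12 above).
  tid2=$(rpc scheduler_thread_create -n deleted -a 100)
  rpc scheduler_thread_delete "$tid2"

As in the trace, scheduler_thread_create prints the new thread id on stdout, which is why the test captures it into a variable before the set_active and delete calls.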
00:15:30.784 00:15:30.784 real 0m4.521s 00:15:30.784 user 0m8.460s 00:15:30.784 sys 0m0.415s 00:15:30.784 21:18:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:30.784 21:18:19 -- common/autotest_common.sh@10 -- # set +x 00:15:30.784 ************************************ 00:15:30.784 END TEST event_scheduler 00:15:30.784 ************************************ 00:15:30.784 21:18:19 -- event/event.sh@51 -- # modprobe -n nbd 00:15:30.784 21:18:19 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:15:30.784 21:18:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:30.784 21:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:30.784 21:18:19 -- common/autotest_common.sh@10 -- # set +x 00:15:30.784 ************************************ 00:15:30.784 START TEST app_repeat 00:15:30.784 ************************************ 00:15:30.784 21:18:19 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:15:30.784 21:18:19 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:30.784 21:18:19 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:30.784 21:18:19 -- event/event.sh@13 -- # local nbd_list 00:15:30.784 21:18:19 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:30.784 21:18:19 -- event/event.sh@14 -- # local bdev_list 00:15:30.784 21:18:19 -- event/event.sh@15 -- # local repeat_times=4 00:15:30.784 21:18:19 -- event/event.sh@17 -- # modprobe nbd 00:15:30.784 21:18:19 -- event/event.sh@19 -- # repeat_pid=74854 00:15:30.784 21:18:19 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:15:30.784 21:18:19 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:15:30.784 Process app_repeat pid: 74854 00:15:30.784 21:18:19 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 74854' 00:15:30.784 21:18:19 -- event/event.sh@23 -- # for i in {0..2} 00:15:30.784 spdk_app_start Round 0 00:15:30.784 21:18:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:15:30.784 21:18:19 -- event/event.sh@25 -- # waitforlisten 74854 /var/tmp/spdk-nbd.sock 00:15:30.784 21:18:19 -- common/autotest_common.sh@817 -- # '[' -z 74854 ']' 00:15:30.784 21:18:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:30.784 21:18:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:30.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:30.784 21:18:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:30.784 21:18:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:30.784 21:18:19 -- common/autotest_common.sh@10 -- # set +x 00:15:30.784 [2024-04-26 21:18:20.003632] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:15:30.784 [2024-04-26 21:18:20.003719] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74854 ] 00:15:31.043 [2024-04-26 21:18:20.142210] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:31.043 [2024-04-26 21:18:20.198373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.043 [2024-04-26 21:18:20.198383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.043 21:18:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:31.043 21:18:20 -- common/autotest_common.sh@850 -- # return 0 00:15:31.043 21:18:20 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:31.301 Malloc0 00:15:31.301 21:18:20 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:31.561 Malloc1 00:15:31.561 21:18:20 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@12 -- # local i 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:31.561 21:18:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:31.819 /dev/nbd0 00:15:32.076 21:18:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:32.076 21:18:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:32.076 21:18:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:15:32.076 21:18:21 -- common/autotest_common.sh@855 -- # local i 00:15:32.076 21:18:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:32.076 21:18:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:32.076 21:18:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:15:32.076 21:18:21 -- common/autotest_common.sh@859 -- # break 00:15:32.076 21:18:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:32.076 21:18:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:32.076 21:18:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:32.076 1+0 records in 00:15:32.076 1+0 records out 00:15:32.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401129 s, 10.2 MB/s 00:15:32.076 21:18:21 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:32.076 21:18:21 -- common/autotest_common.sh@872 -- # size=4096 00:15:32.076 21:18:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:32.076 21:18:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:32.076 21:18:21 -- common/autotest_common.sh@875 -- # return 0 00:15:32.076 21:18:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.076 21:18:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.076 21:18:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:32.076 /dev/nbd1 00:15:32.334 21:18:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:32.334 21:18:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:32.334 21:18:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:15:32.334 21:18:21 -- common/autotest_common.sh@855 -- # local i 00:15:32.334 21:18:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:32.334 21:18:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:32.334 21:18:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:15:32.334 21:18:21 -- common/autotest_common.sh@859 -- # break 00:15:32.334 21:18:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:32.334 21:18:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:32.334 21:18:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:32.334 1+0 records in 00:15:32.334 1+0 records out 00:15:32.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265832 s, 15.4 MB/s 00:15:32.334 21:18:21 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:32.334 21:18:21 -- common/autotest_common.sh@872 -- # size=4096 00:15:32.334 21:18:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:32.334 21:18:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:32.334 21:18:21 -- common/autotest_common.sh@875 -- # return 0 00:15:32.334 21:18:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.334 21:18:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.334 21:18:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:32.334 21:18:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.334 21:18:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:32.592 { 00:15:32.592 "bdev_name": "Malloc0", 00:15:32.592 "nbd_device": "/dev/nbd0" 00:15:32.592 }, 00:15:32.592 { 00:15:32.592 "bdev_name": "Malloc1", 00:15:32.592 "nbd_device": "/dev/nbd1" 00:15:32.592 } 00:15:32.592 ]' 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:32.592 { 00:15:32.592 "bdev_name": "Malloc0", 00:15:32.592 "nbd_device": "/dev/nbd0" 00:15:32.592 }, 00:15:32.592 { 00:15:32.592 "bdev_name": "Malloc1", 00:15:32.592 "nbd_device": "/dev/nbd1" 00:15:32.592 } 00:15:32.592 ]' 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:32.592 /dev/nbd1' 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:32.592 /dev/nbd1' 00:15:32.592 21:18:21 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@65 -- # count=2 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@66 -- # echo 2 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@95 -- # count=2 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:32.592 256+0 records in 00:15:32.592 256+0 records out 00:15:32.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00545823 s, 192 MB/s 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:32.592 256+0 records in 00:15:32.592 256+0 records out 00:15:32.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164748 s, 63.6 MB/s 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:32.592 256+0 records in 00:15:32.592 256+0 records out 00:15:32.592 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284657 s, 36.8 MB/s 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@51 -- # local i 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.592 21:18:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:32.849 21:18:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:32.849 21:18:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:32.850 21:18:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:32.850 21:18:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.850 21:18:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.850 21:18:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:32.850 21:18:22 -- bdev/nbd_common.sh@41 -- # break 00:15:32.850 21:18:22 -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.850 21:18:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.850 21:18:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:33.108 21:18:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:33.108 21:18:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:33.108 21:18:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:33.108 21:18:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.108 21:18:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.108 21:18:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:33.108 21:18:22 -- bdev/nbd_common.sh@41 -- # break 00:15:33.108 21:18:22 -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.108 21:18:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:33.108 21:18:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.108 21:18:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:33.365 21:18:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:33.365 21:18:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:33.365 21:18:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:33.365 21:18:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:33.365 21:18:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:33.365 21:18:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:33.365 21:18:22 -- bdev/nbd_common.sh@65 -- # true 00:15:33.365 21:18:22 -- bdev/nbd_common.sh@65 -- # count=0 00:15:33.365 21:18:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:33.365 21:18:22 -- bdev/nbd_common.sh@104 -- # count=0 00:15:33.365 21:18:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:33.365 21:18:22 -- bdev/nbd_common.sh@109 -- # return 0 00:15:33.365 21:18:22 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:33.623 21:18:22 -- event/event.sh@35 -- # sleep 3 00:15:33.882 [2024-04-26 21:18:23.114544] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:34.141 [2024-04-26 21:18:23.195727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.141 [2024-04-26 21:18:23.195728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.141 [2024-04-26 21:18:23.273494] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:34.141 [2024-04-26 21:18:23.273560] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
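Each app_repeat round above drives the same nbd data path: create 64 MiB malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, write a random 1 MiB pattern through each block device, and compare it back. A minimal standalone sketch of one verify cycle, assuming the app is listening on /var/tmp/spdk-nbd.sock, /dev/nbd0 is free, and this is the first malloc bdev created (so it is named Malloc0); the temp-file path here is illustrative, the commands otherwise mirror the trace:

  #!/usr/bin/env bash
  # Assumes the nbd kernel module is loaded (the test runs "modprobe nbd" first).
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

  rpc bdev_malloc_create 64 4096               # 64 MiB bdev, 4096-byte blocks; prints "Malloc0"
  rpc nbd_start_disk Malloc0 /dev/nbd0

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256          # 1 MiB of random data
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0      # non-zero exit (i.e. test failure) if the data differs

  rpc nbd_stop_disk /dev/nbd0
  rm -f /tmp/nbdrandtest

The rounds that follow repeat exactly this cycle against a fresh pair of Malloc bdevs, which is why the same dd/cmp sizes and transfer rates keep reappearing in the log.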
00:15:36.687 21:18:25 -- event/event.sh@23 -- # for i in {0..2} 00:15:36.687 spdk_app_start Round 1 00:15:36.687 21:18:25 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:15:36.687 21:18:25 -- event/event.sh@25 -- # waitforlisten 74854 /var/tmp/spdk-nbd.sock 00:15:36.687 21:18:25 -- common/autotest_common.sh@817 -- # '[' -z 74854 ']' 00:15:36.687 21:18:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:36.687 21:18:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:36.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:36.687 21:18:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:36.687 21:18:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:36.687 21:18:25 -- common/autotest_common.sh@10 -- # set +x 00:15:36.947 21:18:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:36.947 21:18:26 -- common/autotest_common.sh@850 -- # return 0 00:15:36.947 21:18:26 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:37.206 Malloc0 00:15:37.206 21:18:26 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:37.464 Malloc1 00:15:37.465 21:18:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@12 -- # local i 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.465 21:18:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:37.723 /dev/nbd0 00:15:37.723 21:18:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:37.723 21:18:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:37.723 21:18:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:15:37.723 21:18:26 -- common/autotest_common.sh@855 -- # local i 00:15:37.723 21:18:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:37.723 21:18:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:37.723 21:18:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:15:37.723 21:18:26 -- common/autotest_common.sh@859 -- # break 00:15:37.723 21:18:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:37.723 21:18:26 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:15:37.723 21:18:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:37.723 1+0 records in 00:15:37.723 1+0 records out 00:15:37.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361213 s, 11.3 MB/s 00:15:37.723 21:18:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:37.723 21:18:26 -- common/autotest_common.sh@872 -- # size=4096 00:15:37.723 21:18:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:37.723 21:18:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:37.723 21:18:26 -- common/autotest_common.sh@875 -- # return 0 00:15:37.723 21:18:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.723 21:18:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.723 21:18:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:37.982 /dev/nbd1 00:15:37.982 21:18:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:37.982 21:18:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:37.982 21:18:27 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:15:37.982 21:18:27 -- common/autotest_common.sh@855 -- # local i 00:15:37.982 21:18:27 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:37.982 21:18:27 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:37.982 21:18:27 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:15:37.982 21:18:27 -- common/autotest_common.sh@859 -- # break 00:15:37.982 21:18:27 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:37.982 21:18:27 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:37.982 21:18:27 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:37.982 1+0 records in 00:15:37.982 1+0 records out 00:15:37.982 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491274 s, 8.3 MB/s 00:15:37.982 21:18:27 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:37.982 21:18:27 -- common/autotest_common.sh@872 -- # size=4096 00:15:37.982 21:18:27 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:37.982 21:18:27 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:37.982 21:18:27 -- common/autotest_common.sh@875 -- # return 0 00:15:37.982 21:18:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.982 21:18:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:37.982 21:18:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:37.982 21:18:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.982 21:18:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:38.275 { 00:15:38.275 "bdev_name": "Malloc0", 00:15:38.275 "nbd_device": "/dev/nbd0" 00:15:38.275 }, 00:15:38.275 { 00:15:38.275 "bdev_name": "Malloc1", 00:15:38.275 "nbd_device": "/dev/nbd1" 00:15:38.275 } 00:15:38.275 ]' 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:38.275 { 00:15:38.275 "bdev_name": "Malloc0", 00:15:38.275 "nbd_device": "/dev/nbd0" 00:15:38.275 }, 00:15:38.275 { 00:15:38.275 
"bdev_name": "Malloc1", 00:15:38.275 "nbd_device": "/dev/nbd1" 00:15:38.275 } 00:15:38.275 ]' 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:38.275 /dev/nbd1' 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:38.275 /dev/nbd1' 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@65 -- # count=2 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@95 -- # count=2 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:38.275 256+0 records in 00:15:38.275 256+0 records out 00:15:38.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136521 s, 76.8 MB/s 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:38.275 256+0 records in 00:15:38.275 256+0 records out 00:15:38.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229005 s, 45.8 MB/s 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.275 21:18:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:38.559 256+0 records in 00:15:38.559 256+0 records out 00:15:38.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249598 s, 42.0 MB/s 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:15:38.559 21:18:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@51 -- # local i 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@41 -- # break 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.559 21:18:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:38.817 21:18:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:38.817 21:18:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:38.817 21:18:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:38.817 21:18:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.817 21:18:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.817 21:18:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:38.817 21:18:28 -- bdev/nbd_common.sh@41 -- # break 00:15:38.817 21:18:28 -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.817 21:18:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:38.817 21:18:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.817 21:18:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:39.075 21:18:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:39.075 21:18:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:39.075 21:18:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:39.075 21:18:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:39.075 21:18:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:39.075 21:18:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:39.075 21:18:28 -- bdev/nbd_common.sh@65 -- # true 00:15:39.075 21:18:28 -- bdev/nbd_common.sh@65 -- # count=0 00:15:39.075 21:18:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:39.075 21:18:28 -- bdev/nbd_common.sh@104 -- # count=0 00:15:39.075 21:18:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:39.075 21:18:28 -- bdev/nbd_common.sh@109 -- # return 0 00:15:39.075 21:18:28 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:39.333 21:18:28 -- event/event.sh@35 -- # sleep 3 00:15:39.592 [2024-04-26 21:18:28.672640] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:39.592 [2024-04-26 21:18:28.720956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.592 [2024-04-26 21:18:28.720956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.592 [2024-04-26 21:18:28.762999] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:15:39.592 [2024-04-26 21:18:28.763051] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:42.892 21:18:31 -- event/event.sh@23 -- # for i in {0..2} 00:15:42.892 spdk_app_start Round 2 00:15:42.892 21:18:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:15:42.892 21:18:31 -- event/event.sh@25 -- # waitforlisten 74854 /var/tmp/spdk-nbd.sock 00:15:42.892 21:18:31 -- common/autotest_common.sh@817 -- # '[' -z 74854 ']' 00:15:42.892 21:18:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:42.892 21:18:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:42.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:42.892 21:18:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:42.892 21:18:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:42.892 21:18:31 -- common/autotest_common.sh@10 -- # set +x 00:15:42.892 21:18:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:42.892 21:18:31 -- common/autotest_common.sh@850 -- # return 0 00:15:42.892 21:18:31 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:42.892 Malloc0 00:15:42.892 21:18:31 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:42.892 Malloc1 00:15:43.151 21:18:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@12 -- # local i 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:43.151 /dev/nbd0 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:43.151 21:18:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:43.151 21:18:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:15:43.151 21:18:32 -- common/autotest_common.sh@855 -- # local i 00:15:43.151 21:18:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:43.151 21:18:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:43.151 21:18:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:15:43.410 21:18:32 -- common/autotest_common.sh@859 
-- # break 00:15:43.410 21:18:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:43.410 21:18:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:43.410 21:18:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:43.410 1+0 records in 00:15:43.410 1+0 records out 00:15:43.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538372 s, 7.6 MB/s 00:15:43.410 21:18:32 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:43.410 21:18:32 -- common/autotest_common.sh@872 -- # size=4096 00:15:43.410 21:18:32 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:43.410 21:18:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:43.410 21:18:32 -- common/autotest_common.sh@875 -- # return 0 00:15:43.410 21:18:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.410 21:18:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:43.410 21:18:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:43.410 /dev/nbd1 00:15:43.410 21:18:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:43.410 21:18:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:43.410 21:18:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:15:43.410 21:18:32 -- common/autotest_common.sh@855 -- # local i 00:15:43.410 21:18:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:15:43.410 21:18:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:15:43.410 21:18:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:15:43.410 21:18:32 -- common/autotest_common.sh@859 -- # break 00:15:43.410 21:18:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:15:43.410 21:18:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:15:43.410 21:18:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:43.410 1+0 records in 00:15:43.410 1+0 records out 00:15:43.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521577 s, 7.9 MB/s 00:15:43.668 21:18:32 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:43.668 21:18:32 -- common/autotest_common.sh@872 -- # size=4096 00:15:43.668 21:18:32 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:43.668 21:18:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:15:43.668 21:18:32 -- common/autotest_common.sh@875 -- # return 0 00:15:43.668 21:18:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:43.668 21:18:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:43.668 21:18:32 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:43.668 21:18:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.668 21:18:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:43.668 21:18:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:43.668 { 00:15:43.668 "bdev_name": "Malloc0", 00:15:43.668 "nbd_device": "/dev/nbd0" 00:15:43.668 }, 00:15:43.668 { 00:15:43.668 "bdev_name": "Malloc1", 00:15:43.668 "nbd_device": "/dev/nbd1" 00:15:43.668 } 00:15:43.668 ]' 00:15:43.668 21:18:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:43.668 { 00:15:43.668 "bdev_name": "Malloc0", 00:15:43.668 
"nbd_device": "/dev/nbd0" 00:15:43.668 }, 00:15:43.668 { 00:15:43.668 "bdev_name": "Malloc1", 00:15:43.668 "nbd_device": "/dev/nbd1" 00:15:43.669 } 00:15:43.669 ]' 00:15:43.669 21:18:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:43.927 /dev/nbd1' 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:43.927 /dev/nbd1' 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@65 -- # count=2 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@66 -- # echo 2 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@95 -- # count=2 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:43.927 256+0 records in 00:15:43.927 256+0 records out 00:15:43.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138394 s, 75.8 MB/s 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:43.927 256+0 records in 00:15:43.927 256+0 records out 00:15:43.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221533 s, 47.3 MB/s 00:15:43.927 21:18:32 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:43.927 256+0 records in 00:15:43.927 256+0 records out 00:15:43.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239974 s, 43.7 MB/s 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:43.927 21:18:33 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@51 -- # local i 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:43.927 21:18:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:44.186 21:18:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:44.186 21:18:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:44.186 21:18:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:44.186 21:18:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.186 21:18:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.186 21:18:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:44.186 21:18:33 -- bdev/nbd_common.sh@41 -- # break 00:15:44.186 21:18:33 -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.186 21:18:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.186 21:18:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:44.444 21:18:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:44.444 21:18:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:44.444 21:18:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:44.444 21:18:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.444 21:18:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.444 21:18:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:44.444 21:18:33 -- bdev/nbd_common.sh@41 -- # break 00:15:44.444 21:18:33 -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.444 21:18:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:44.444 21:18:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.444 21:18:33 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:44.702 21:18:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:44.702 21:18:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:44.702 21:18:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:44.702 21:18:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:44.702 21:18:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:15:44.702 21:18:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:44.702 21:18:33 -- bdev/nbd_common.sh@65 -- # true 00:15:44.702 21:18:33 -- bdev/nbd_common.sh@65 -- # count=0 00:15:44.702 21:18:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:15:44.702 21:18:33 -- bdev/nbd_common.sh@104 -- # count=0 00:15:44.702 21:18:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:44.702 21:18:33 -- bdev/nbd_common.sh@109 -- # return 0 00:15:44.702 21:18:33 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:44.961 21:18:34 -- event/event.sh@35 -- # sleep 3 00:15:44.961 [2024-04-26 21:18:34.152738] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:44.961 [2024-04-26 21:18:34.202070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.961 [2024-04-26 21:18:34.202070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.220 [2024-04-26 21:18:34.244563] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:15:45.220 [2024-04-26 21:18:34.244617] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:47.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:47.756 21:18:37 -- event/event.sh@38 -- # waitforlisten 74854 /var/tmp/spdk-nbd.sock 00:15:47.756 21:18:37 -- common/autotest_common.sh@817 -- # '[' -z 74854 ']' 00:15:47.756 21:18:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:47.756 21:18:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:47.756 21:18:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:47.756 21:18:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:47.756 21:18:37 -- common/autotest_common.sh@10 -- # set +x 00:15:48.014 21:18:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:48.014 21:18:37 -- common/autotest_common.sh@850 -- # return 0 00:15:48.014 21:18:37 -- event/event.sh@39 -- # killprocess 74854 00:15:48.014 21:18:37 -- common/autotest_common.sh@936 -- # '[' -z 74854 ']' 00:15:48.014 21:18:37 -- common/autotest_common.sh@940 -- # kill -0 74854 00:15:48.014 21:18:37 -- common/autotest_common.sh@941 -- # uname 00:15:48.014 21:18:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.014 21:18:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74854 00:15:48.014 killing process with pid 74854 00:15:48.014 21:18:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:48.014 21:18:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:48.014 21:18:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74854' 00:15:48.014 21:18:37 -- common/autotest_common.sh@955 -- # kill 74854 00:15:48.014 21:18:37 -- common/autotest_common.sh@960 -- # wait 74854 00:15:48.280 spdk_app_start is called in Round 0. 00:15:48.280 Shutdown signal received, stop current app iteration 00:15:48.280 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:15:48.280 spdk_app_start is called in Round 1. 00:15:48.280 Shutdown signal received, stop current app iteration 00:15:48.280 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:15:48.280 spdk_app_start is called in Round 2. 00:15:48.280 Shutdown signal received, stop current app iteration 00:15:48.280 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:15:48.280 spdk_app_start is called in Round 3. 
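Each app_repeat round traced above performs the same NBD round trip: two malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, random data is pushed through the block devices, and the device contents are compared back against the source file. A minimal stand-alone sketch of that flow (not the test's own helper), assuming spdk_tgt is already listening on /var/tmp/spdk-nbd.sock, the nbd kernel module is loaded, and rpc.py is the script from the SPDK tree used in the trace:

  #!/usr/bin/env bash
  # Sketch of the NBD data-verify round trip; paths and socket are assumptions.
  set -euo pipefail
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  TMP=$(mktemp)

  # Two 64 MiB malloc bdevs with 4096-byte blocks, as in the trace (-> Malloc0, Malloc1).
  $RPC bdev_malloc_create 64 4096
  $RPC bdev_malloc_create 64 4096

  # Export them through the kernel NBD driver.
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  $RPC nbd_start_disk Malloc1 /dev/nbd1

  # Write 1 MiB of random data through each device, then compare it back.
  dd if=/dev/urandom of="$TMP" bs=4096 count=256
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct
      cmp -n 1M "$TMP" "$dev"
  done

  # Detach the devices and confirm nothing is left exported.
  $RPC nbd_stop_disk /dev/nbd0
  $RPC nbd_stop_disk /dev/nbd1
  $RPC nbd_get_disks   # expected to print an empty list: []
  rm -f "$TMP"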
00:15:48.280 Shutdown signal received, stop current app iteration 00:15:48.280 21:18:37 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:15:48.280 21:18:37 -- event/event.sh@42 -- # return 0 00:15:48.280 00:15:48.280 real 0m17.454s 00:15:48.280 user 0m38.615s 00:15:48.280 sys 0m2.971s 00:15:48.280 ************************************ 00:15:48.280 END TEST app_repeat 00:15:48.280 ************************************ 00:15:48.280 21:18:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:48.280 21:18:37 -- common/autotest_common.sh@10 -- # set +x 00:15:48.280 21:18:37 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:15:48.280 21:18:37 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:48.280 21:18:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:48.280 21:18:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.280 21:18:37 -- common/autotest_common.sh@10 -- # set +x 00:15:48.550 ************************************ 00:15:48.550 START TEST cpu_locks 00:15:48.550 ************************************ 00:15:48.550 21:18:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:48.550 * Looking for test storage... 00:15:48.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:48.550 21:18:37 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:15:48.550 21:18:37 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:15:48.550 21:18:37 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:15:48.550 21:18:37 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:15:48.550 21:18:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:48.550 21:18:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.550 21:18:37 -- common/autotest_common.sh@10 -- # set +x 00:15:48.550 ************************************ 00:15:48.550 START TEST default_locks 00:15:48.550 ************************************ 00:15:48.550 21:18:37 -- common/autotest_common.sh@1111 -- # default_locks 00:15:48.550 21:18:37 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=75468 00:15:48.550 21:18:37 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:48.550 21:18:37 -- event/cpu_locks.sh@47 -- # waitforlisten 75468 00:15:48.550 21:18:37 -- common/autotest_common.sh@817 -- # '[' -z 75468 ']' 00:15:48.550 21:18:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.550 21:18:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:48.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.550 21:18:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.550 21:18:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:48.550 21:18:37 -- common/autotest_common.sh@10 -- # set +x 00:15:48.810 [2024-04-26 21:18:37.834026] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:15:48.810 [2024-04-26 21:18:37.834096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75468 ] 00:15:48.810 [2024-04-26 21:18:37.972214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.810 [2024-04-26 21:18:38.023905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.746 21:18:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:49.746 21:18:38 -- common/autotest_common.sh@850 -- # return 0 00:15:49.746 21:18:38 -- event/cpu_locks.sh@49 -- # locks_exist 75468 00:15:49.746 21:18:38 -- event/cpu_locks.sh@22 -- # lslocks -p 75468 00:15:49.746 21:18:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:49.746 21:18:38 -- event/cpu_locks.sh@50 -- # killprocess 75468 00:15:49.746 21:18:38 -- common/autotest_common.sh@936 -- # '[' -z 75468 ']' 00:15:49.746 21:18:38 -- common/autotest_common.sh@940 -- # kill -0 75468 00:15:49.746 21:18:38 -- common/autotest_common.sh@941 -- # uname 00:15:49.746 21:18:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:49.746 21:18:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75468 00:15:50.004 21:18:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:50.004 21:18:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:50.004 killing process with pid 75468 00:15:50.004 21:18:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75468' 00:15:50.004 21:18:39 -- common/autotest_common.sh@955 -- # kill 75468 00:15:50.004 21:18:39 -- common/autotest_common.sh@960 -- # wait 75468 00:15:50.263 21:18:39 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 75468 00:15:50.263 21:18:39 -- common/autotest_common.sh@638 -- # local es=0 00:15:50.263 21:18:39 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 75468 00:15:50.263 21:18:39 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:15:50.263 21:18:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:50.263 21:18:39 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:15:50.263 21:18:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:50.263 21:18:39 -- common/autotest_common.sh@641 -- # waitforlisten 75468 00:15:50.263 21:18:39 -- common/autotest_common.sh@817 -- # '[' -z 75468 ']' 00:15:50.263 21:18:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.263 21:18:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:50.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.264 21:18:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
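The default_locks check above hinges on the per-core lock files spdk_tgt takes when it starts with a cpumask: locks_exist simply asks lslocks whether the target process still holds a file lock named spdk_cpu_lock_*. A stand-alone sketch of that probe (a hypothetical helper, not the test's own code), assuming the spdk_tgt binary path from the trace:

  # Sketch: confirm a running spdk_tgt holds its CPU core lock (paths assumed).
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $SPDK_BIN -m 0x1 &                 # claim core 0
  pid=$!
  sleep 2                            # crude stand-in for waitforlisten

  # The claim shows up both as a kernel file lock and as a file under /var/tmp.
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null   # expect /var/tmp/spdk_cpu_lock_000

  kill "$pid" && wait "$pid" 2>/dev/null || true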
00:15:50.264 21:18:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:50.264 21:18:39 -- common/autotest_common.sh@10 -- # set +x 00:15:50.264 ERROR: process (pid: 75468) is no longer running 00:15:50.264 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (75468) - No such process 00:15:50.264 21:18:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:50.264 21:18:39 -- common/autotest_common.sh@850 -- # return 1 00:15:50.264 21:18:39 -- common/autotest_common.sh@641 -- # es=1 00:15:50.264 21:18:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:50.264 21:18:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:50.264 21:18:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:50.264 21:18:39 -- event/cpu_locks.sh@54 -- # no_locks 00:15:50.264 21:18:39 -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:50.264 21:18:39 -- event/cpu_locks.sh@26 -- # local lock_files 00:15:50.264 21:18:39 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:50.264 00:15:50.264 real 0m1.564s 00:15:50.264 user 0m1.642s 00:15:50.264 sys 0m0.454s 00:15:50.264 21:18:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:50.264 21:18:39 -- common/autotest_common.sh@10 -- # set +x 00:15:50.264 ************************************ 00:15:50.264 END TEST default_locks 00:15:50.264 ************************************ 00:15:50.264 21:18:39 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:15:50.264 21:18:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:50.264 21:18:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:50.264 21:18:39 -- common/autotest_common.sh@10 -- # set +x 00:15:50.264 ************************************ 00:15:50.264 START TEST default_locks_via_rpc 00:15:50.264 ************************************ 00:15:50.264 21:18:39 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:15:50.264 21:18:39 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=75536 00:15:50.264 21:18:39 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:50.264 21:18:39 -- event/cpu_locks.sh@63 -- # waitforlisten 75536 00:15:50.264 21:18:39 -- common/autotest_common.sh@817 -- # '[' -z 75536 ']' 00:15:50.264 21:18:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.264 21:18:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:50.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.264 21:18:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.264 21:18:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:50.264 21:18:39 -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 [2024-04-26 21:18:39.529637] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:15:50.523 [2024-04-26 21:18:39.529691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75536 ] 00:15:50.523 [2024-04-26 21:18:39.667204] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.523 [2024-04-26 21:18:39.718147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.456 21:18:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:51.456 21:18:40 -- common/autotest_common.sh@850 -- # return 0 00:15:51.457 21:18:40 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:15:51.457 21:18:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.457 21:18:40 -- common/autotest_common.sh@10 -- # set +x 00:15:51.457 21:18:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.457 21:18:40 -- event/cpu_locks.sh@67 -- # no_locks 00:15:51.457 21:18:40 -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:51.457 21:18:40 -- event/cpu_locks.sh@26 -- # local lock_files 00:15:51.457 21:18:40 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:51.457 21:18:40 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:15:51.457 21:18:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:51.457 21:18:40 -- common/autotest_common.sh@10 -- # set +x 00:15:51.457 21:18:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:51.457 21:18:40 -- event/cpu_locks.sh@71 -- # locks_exist 75536 00:15:51.457 21:18:40 -- event/cpu_locks.sh@22 -- # lslocks -p 75536 00:15:51.457 21:18:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:51.716 21:18:40 -- event/cpu_locks.sh@73 -- # killprocess 75536 00:15:51.716 21:18:40 -- common/autotest_common.sh@936 -- # '[' -z 75536 ']' 00:15:51.716 21:18:40 -- common/autotest_common.sh@940 -- # kill -0 75536 00:15:51.716 21:18:40 -- common/autotest_common.sh@941 -- # uname 00:15:51.716 21:18:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:51.716 21:18:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75536 00:15:51.716 21:18:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:51.716 21:18:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:51.716 killing process with pid 75536 00:15:51.716 21:18:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75536' 00:15:51.716 21:18:40 -- common/autotest_common.sh@955 -- # kill 75536 00:15:51.716 21:18:40 -- common/autotest_common.sh@960 -- # wait 75536 00:15:51.975 00:15:51.975 real 0m1.696s 00:15:51.975 user 0m1.765s 00:15:51.975 sys 0m0.520s 00:15:51.975 21:18:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:51.975 21:18:41 -- common/autotest_common.sh@10 -- # set +x 00:15:51.975 ************************************ 00:15:51.975 END TEST default_locks_via_rpc 00:15:51.975 ************************************ 00:15:51.975 21:18:41 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:15:51.975 21:18:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:51.975 21:18:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:51.975 21:18:41 -- common/autotest_common.sh@10 -- # set +x 00:15:52.239 ************************************ 00:15:52.239 START TEST non_locking_app_on_locked_coremask 00:15:52.239 ************************************ 00:15:52.239 21:18:41 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:15:52.239 21:18:41 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75610 00:15:52.239 21:18:41 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:52.239 21:18:41 -- event/cpu_locks.sh@81 -- # waitforlisten 75610 /var/tmp/spdk.sock 00:15:52.239 21:18:41 -- common/autotest_common.sh@817 -- # '[' -z 75610 ']' 00:15:52.239 21:18:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.239 21:18:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:52.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.239 21:18:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.239 21:18:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:52.239 21:18:41 -- common/autotest_common.sh@10 -- # set +x 00:15:52.239 [2024-04-26 21:18:41.375781] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:52.239 [2024-04-26 21:18:41.375845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75610 ] 00:15:52.505 [2024-04-26 21:18:41.512520] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.505 [2024-04-26 21:18:41.563135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.182 21:18:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:53.183 21:18:42 -- common/autotest_common.sh@850 -- # return 0 00:15:53.183 21:18:42 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:15:53.183 21:18:42 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75637 00:15:53.183 21:18:42 -- event/cpu_locks.sh@85 -- # waitforlisten 75637 /var/tmp/spdk2.sock 00:15:53.183 21:18:42 -- common/autotest_common.sh@817 -- # '[' -z 75637 ']' 00:15:53.183 21:18:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:53.183 21:18:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:53.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:53.183 21:18:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:53.183 21:18:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:53.183 21:18:42 -- common/autotest_common.sh@10 -- # set +x 00:15:53.183 [2024-04-26 21:18:42.302462] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:53.183 [2024-04-26 21:18:42.302530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75637 ] 00:15:53.183 [2024-04-26 21:18:42.434077] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
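The default_locks_via_rpc run traced above drives the same lock bookkeeping over JSON-RPC instead of command-line flags: framework_disable_cpumask_locks releases the per-core locks of a running target and framework_enable_cpumask_locks re-takes them. A minimal sketch of the two calls, assuming a target already listening on the default /var/tmp/spdk.sock:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Release the core locks held by the running target; the test's no_locks helper
  # then expects no spdk_cpu_lock files left under /var/tmp.
  $RPC framework_disable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no core locks held"

  # ...and take them again; locks_exist expects the lock files to be back.
  $RPC framework_enable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_*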
00:15:53.183 [2024-04-26 21:18:42.434115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.441 [2024-04-26 21:18:42.541288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.009 21:18:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:54.009 21:18:43 -- common/autotest_common.sh@850 -- # return 0 00:15:54.009 21:18:43 -- event/cpu_locks.sh@87 -- # locks_exist 75610 00:15:54.009 21:18:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:54.009 21:18:43 -- event/cpu_locks.sh@22 -- # lslocks -p 75610 00:15:54.577 21:18:43 -- event/cpu_locks.sh@89 -- # killprocess 75610 00:15:54.577 21:18:43 -- common/autotest_common.sh@936 -- # '[' -z 75610 ']' 00:15:54.577 21:18:43 -- common/autotest_common.sh@940 -- # kill -0 75610 00:15:54.577 21:18:43 -- common/autotest_common.sh@941 -- # uname 00:15:54.577 21:18:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:54.577 21:18:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75610 00:15:54.577 21:18:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:54.577 21:18:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:54.577 killing process with pid 75610 00:15:54.577 21:18:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75610' 00:15:54.577 21:18:43 -- common/autotest_common.sh@955 -- # kill 75610 00:15:54.577 21:18:43 -- common/autotest_common.sh@960 -- # wait 75610 00:15:55.145 21:18:44 -- event/cpu_locks.sh@90 -- # killprocess 75637 00:15:55.145 21:18:44 -- common/autotest_common.sh@936 -- # '[' -z 75637 ']' 00:15:55.145 21:18:44 -- common/autotest_common.sh@940 -- # kill -0 75637 00:15:55.145 21:18:44 -- common/autotest_common.sh@941 -- # uname 00:15:55.145 21:18:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.145 21:18:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75637 00:15:55.145 21:18:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:55.145 21:18:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:55.145 killing process with pid 75637 00:15:55.145 21:18:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75637' 00:15:55.145 21:18:44 -- common/autotest_common.sh@955 -- # kill 75637 00:15:55.145 21:18:44 -- common/autotest_common.sh@960 -- # wait 75637 00:15:55.404 00:15:55.404 real 0m3.202s 00:15:55.404 user 0m3.506s 00:15:55.404 sys 0m0.853s 00:15:55.404 21:18:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:55.404 21:18:44 -- common/autotest_common.sh@10 -- # set +x 00:15:55.404 ************************************ 00:15:55.404 END TEST non_locking_app_on_locked_coremask 00:15:55.404 ************************************ 00:15:55.404 21:18:44 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:15:55.404 21:18:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:55.404 21:18:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:55.404 21:18:44 -- common/autotest_common.sh@10 -- # set +x 00:15:55.663 ************************************ 00:15:55.663 START TEST locking_app_on_unlocked_coremask 00:15:55.663 ************************************ 00:15:55.663 21:18:44 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:15:55.663 21:18:44 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75711 00:15:55.663 21:18:44 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x1 --disable-cpumask-locks 00:15:55.663 21:18:44 -- event/cpu_locks.sh@99 -- # waitforlisten 75711 /var/tmp/spdk.sock 00:15:55.663 21:18:44 -- common/autotest_common.sh@817 -- # '[' -z 75711 ']' 00:15:55.663 21:18:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.663 21:18:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:55.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.663 21:18:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.663 21:18:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:55.663 21:18:44 -- common/autotest_common.sh@10 -- # set +x 00:15:55.663 [2024-04-26 21:18:44.716736] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:55.663 [2024-04-26 21:18:44.716814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75711 ] 00:15:55.663 [2024-04-26 21:18:44.842020] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:15:55.663 [2024-04-26 21:18:44.842060] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.663 [2024-04-26 21:18:44.892196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.599 21:18:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:56.599 21:18:45 -- common/autotest_common.sh@850 -- # return 0 00:15:56.599 21:18:45 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75739 00:15:56.599 21:18:45 -- event/cpu_locks.sh@103 -- # waitforlisten 75739 /var/tmp/spdk2.sock 00:15:56.599 21:18:45 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:15:56.599 21:18:45 -- common/autotest_common.sh@817 -- # '[' -z 75739 ']' 00:15:56.599 21:18:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:56.599 21:18:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:56.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:56.599 21:18:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:56.599 21:18:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:56.599 21:18:45 -- common/autotest_common.sh@10 -- # set +x 00:15:56.599 [2024-04-26 21:18:45.724554] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
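The two cases traced here (non_locking_app_on_locked_coremask and locking_app_on_unlocked_coremask) are mirror images of each other: two targets share core 0 as long as one of them opts out with --disable-cpumask-locks, and in either arrangement only the instance launched without the flag ends up holding a core lock. A condensed sketch of the first arrangement (locked instance first, opt-out instance second on its own RPC socket), with the binary path assumed from the trace:

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  $SPDK_BIN -m 0x1 &                                          # holds the core-0 lock
  pid1=$!
  $SPDK_BIN -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                                     # shares core 0, takes no lock
  sleep 2                                                     # crude stand-in for waitforlisten

  lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "instance 1 holds a core lock"
  lslocks -p "$pid2" | grep -q spdk_cpu_lock || echo "instance 2 holds no core lock"

  kill "$pid1" "$pid2"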
00:15:56.599 [2024-04-26 21:18:45.724629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75739 ] 00:15:56.859 [2024-04-26 21:18:45.859046] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.859 [2024-04-26 21:18:45.962459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.428 21:18:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:57.428 21:18:46 -- common/autotest_common.sh@850 -- # return 0 00:15:57.428 21:18:46 -- event/cpu_locks.sh@105 -- # locks_exist 75739 00:15:57.428 21:18:46 -- event/cpu_locks.sh@22 -- # lslocks -p 75739 00:15:57.428 21:18:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:57.996 21:18:47 -- event/cpu_locks.sh@107 -- # killprocess 75711 00:15:57.996 21:18:47 -- common/autotest_common.sh@936 -- # '[' -z 75711 ']' 00:15:57.996 21:18:47 -- common/autotest_common.sh@940 -- # kill -0 75711 00:15:57.996 21:18:47 -- common/autotest_common.sh@941 -- # uname 00:15:57.996 21:18:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:57.996 21:18:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75711 00:15:57.996 21:18:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:57.996 21:18:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:57.996 killing process with pid 75711 00:15:57.996 21:18:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75711' 00:15:57.996 21:18:47 -- common/autotest_common.sh@955 -- # kill 75711 00:15:57.996 21:18:47 -- common/autotest_common.sh@960 -- # wait 75711 00:15:58.935 21:18:47 -- event/cpu_locks.sh@108 -- # killprocess 75739 00:15:58.935 21:18:47 -- common/autotest_common.sh@936 -- # '[' -z 75739 ']' 00:15:58.935 21:18:47 -- common/autotest_common.sh@940 -- # kill -0 75739 00:15:58.935 21:18:47 -- common/autotest_common.sh@941 -- # uname 00:15:58.935 21:18:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:58.935 21:18:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75739 00:15:58.935 21:18:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:58.935 21:18:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:58.935 killing process with pid 75739 00:15:58.935 21:18:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75739' 00:15:58.935 21:18:47 -- common/autotest_common.sh@955 -- # kill 75739 00:15:58.935 21:18:47 -- common/autotest_common.sh@960 -- # wait 75739 00:15:58.935 00:15:58.935 real 0m3.526s 00:15:58.935 user 0m3.909s 00:15:58.935 sys 0m0.968s 00:15:58.935 21:18:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:58.935 21:18:48 -- common/autotest_common.sh@10 -- # set +x 00:15:58.935 ************************************ 00:15:58.935 END TEST locking_app_on_unlocked_coremask 00:15:58.935 ************************************ 00:15:59.193 21:18:48 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:15:59.193 21:18:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:59.193 21:18:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:59.193 21:18:48 -- common/autotest_common.sh@10 -- # set +x 00:15:59.193 ************************************ 00:15:59.193 START TEST locking_app_on_locked_coremask 00:15:59.193 
************************************ 00:15:59.193 21:18:48 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:15:59.193 21:18:48 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=75818 00:15:59.193 21:18:48 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:59.193 21:18:48 -- event/cpu_locks.sh@116 -- # waitforlisten 75818 /var/tmp/spdk.sock 00:15:59.193 21:18:48 -- common/autotest_common.sh@817 -- # '[' -z 75818 ']' 00:15:59.193 21:18:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.193 21:18:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:59.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.194 21:18:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.194 21:18:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:59.194 21:18:48 -- common/autotest_common.sh@10 -- # set +x 00:15:59.194 [2024-04-26 21:18:48.390902] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:59.194 [2024-04-26 21:18:48.390984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75818 ] 00:15:59.452 [2024-04-26 21:18:48.531561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.452 [2024-04-26 21:18:48.583981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.388 21:18:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:00.388 21:18:49 -- common/autotest_common.sh@850 -- # return 0 00:16:00.388 21:18:49 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=75846 00:16:00.388 21:18:49 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 75846 /var/tmp/spdk2.sock 00:16:00.388 21:18:49 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:00.388 21:18:49 -- common/autotest_common.sh@638 -- # local es=0 00:16:00.388 21:18:49 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 75846 /var/tmp/spdk2.sock 00:16:00.388 21:18:49 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:16:00.388 21:18:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:00.388 21:18:49 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:16:00.388 21:18:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:00.388 21:18:49 -- common/autotest_common.sh@641 -- # waitforlisten 75846 /var/tmp/spdk2.sock 00:16:00.389 21:18:49 -- common/autotest_common.sh@817 -- # '[' -z 75846 ']' 00:16:00.389 21:18:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:00.389 21:18:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:00.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:00.389 21:18:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:00.389 21:18:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:00.389 21:18:49 -- common/autotest_common.sh@10 -- # set +x 00:16:00.389 [2024-04-26 21:18:49.384715] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
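locking_app_on_locked_coremask then flips the previous case: with neither side passing --disable-cpumask-locks, the second -m 0x1 target launched above is expected to refuse to start, because core 0 is already claimed by the first instance. A sketch of that expected behaviour (assumed binary path, not the test's own code):

  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  $SPDK_BIN -m 0x1 &                # first instance claims core 0
  pid1=$!
  sleep 2

  # A second full-lock instance on the same core and a separate RPC socket
  # should exit on its own with "Unable to acquire lock on assigned core mask".
  if ! $SPDK_BIN -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "second instance exited: core 0 already locked by $pid1"
  fi
  kill "$pid1"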
00:16:00.389 [2024-04-26 21:18:49.384793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75846 ] 00:16:00.389 [2024-04-26 21:18:49.519359] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 75818 has claimed it. 00:16:00.389 [2024-04-26 21:18:49.519428] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:00.955 ERROR: process (pid: 75846) is no longer running 00:16:00.955 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (75846) - No such process 00:16:00.955 21:18:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:00.955 21:18:50 -- common/autotest_common.sh@850 -- # return 1 00:16:00.955 21:18:50 -- common/autotest_common.sh@641 -- # es=1 00:16:00.955 21:18:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:00.955 21:18:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:00.955 21:18:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:00.955 21:18:50 -- event/cpu_locks.sh@122 -- # locks_exist 75818 00:16:00.955 21:18:50 -- event/cpu_locks.sh@22 -- # lslocks -p 75818 00:16:00.955 21:18:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:01.213 21:18:50 -- event/cpu_locks.sh@124 -- # killprocess 75818 00:16:01.213 21:18:50 -- common/autotest_common.sh@936 -- # '[' -z 75818 ']' 00:16:01.213 21:18:50 -- common/autotest_common.sh@940 -- # kill -0 75818 00:16:01.213 21:18:50 -- common/autotest_common.sh@941 -- # uname 00:16:01.213 21:18:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:01.213 21:18:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75818 00:16:01.213 21:18:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:01.213 21:18:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:01.213 killing process with pid 75818 00:16:01.213 21:18:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75818' 00:16:01.213 21:18:50 -- common/autotest_common.sh@955 -- # kill 75818 00:16:01.213 21:18:50 -- common/autotest_common.sh@960 -- # wait 75818 00:16:01.779 00:16:01.779 real 0m2.440s 00:16:01.779 user 0m2.781s 00:16:01.779 sys 0m0.597s 00:16:01.779 21:18:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:01.779 21:18:50 -- common/autotest_common.sh@10 -- # set +x 00:16:01.779 ************************************ 00:16:01.779 END TEST locking_app_on_locked_coremask 00:16:01.779 ************************************ 00:16:01.779 21:18:50 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:16:01.779 21:18:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:01.779 21:18:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:01.779 21:18:50 -- common/autotest_common.sh@10 -- # set +x 00:16:01.779 ************************************ 00:16:01.779 START TEST locking_overlapped_coremask 00:16:01.779 ************************************ 00:16:01.779 21:18:50 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:16:01.779 21:18:50 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=75901 00:16:01.779 21:18:50 -- event/cpu_locks.sh@133 -- # waitforlisten 75901 /var/tmp/spdk.sock 00:16:01.779 21:18:50 -- common/autotest_common.sh@817 -- # '[' -z 75901 ']' 00:16:01.779 21:18:50 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.779 21:18:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:01.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.779 21:18:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.779 21:18:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:01.779 21:18:50 -- common/autotest_common.sh@10 -- # set +x 00:16:01.779 21:18:50 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:01.779 [2024-04-26 21:18:50.986676] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:01.779 [2024-04-26 21:18:50.986774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75901 ] 00:16:02.037 [2024-04-26 21:18:51.134722] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:02.037 [2024-04-26 21:18:51.185144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.037 [2024-04-26 21:18:51.185338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.037 [2024-04-26 21:18:51.185374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.973 21:18:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:02.973 21:18:51 -- common/autotest_common.sh@850 -- # return 0 00:16:02.973 21:18:51 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=75931 00:16:02.973 21:18:51 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:16:02.973 21:18:51 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 75931 /var/tmp/spdk2.sock 00:16:02.973 21:18:51 -- common/autotest_common.sh@638 -- # local es=0 00:16:02.973 21:18:51 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 75931 /var/tmp/spdk2.sock 00:16:02.973 21:18:51 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:16:02.973 21:18:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:02.973 21:18:51 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:16:02.973 21:18:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:02.973 21:18:51 -- common/autotest_common.sh@641 -- # waitforlisten 75931 /var/tmp/spdk2.sock 00:16:02.973 21:18:51 -- common/autotest_common.sh@817 -- # '[' -z 75931 ']' 00:16:02.973 21:18:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:02.973 21:18:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:02.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:02.973 21:18:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:02.973 21:18:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:02.973 21:18:51 -- common/autotest_common.sh@10 -- # set +x 00:16:02.973 [2024-04-26 21:18:51.948457] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
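The -m 0x1c instance just launched (cores 2-4) overlaps the running -m 0x7 target (cores 0-2) on core 2, so its core claim is expected to fail, as the trace below confirms; check_remaining_locks then verifies that only the first target's lock files survive. Each claimed core is backed by a zero-padded file under /var/tmp, so the comparison reduces to a glob against the expected set. A sketch of that comparison for the 0x7 case, assuming the first target is still running:

  # Expected lock files for -m 0x7 (cores 0, 1 and 2), as in check_remaining_locks.
  expected=(/var/tmp/spdk_cpu_lock_{000..002})
  actual=(/var/tmp/spdk_cpu_lock_*)   # glob stays literal if no locks exist

  if [[ "${actual[*]}" == "${expected[*]}" ]]; then
      echo "lock files match the 0x7 cpumask"
  else
      echo "unexpected lock files: ${actual[*]}" >&2
  fi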
00:16:02.973 [2024-04-26 21:18:51.948552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75931 ] 00:16:02.973 [2024-04-26 21:18:52.085459] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75901 has claimed it. 00:16:02.973 [2024-04-26 21:18:52.085518] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:03.544 ERROR: process (pid: 75931) is no longer running 00:16:03.544 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (75931) - No such process 00:16:03.544 21:18:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:03.544 21:18:52 -- common/autotest_common.sh@850 -- # return 1 00:16:03.544 21:18:52 -- common/autotest_common.sh@641 -- # es=1 00:16:03.544 21:18:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:03.544 21:18:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:03.544 21:18:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:03.544 21:18:52 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:16:03.544 21:18:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:03.544 21:18:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:03.544 21:18:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:03.544 21:18:52 -- event/cpu_locks.sh@141 -- # killprocess 75901 00:16:03.544 21:18:52 -- common/autotest_common.sh@936 -- # '[' -z 75901 ']' 00:16:03.544 21:18:52 -- common/autotest_common.sh@940 -- # kill -0 75901 00:16:03.544 21:18:52 -- common/autotest_common.sh@941 -- # uname 00:16:03.544 21:18:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:03.544 21:18:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75901 00:16:03.545 21:18:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:03.545 21:18:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:03.545 killing process with pid 75901 00:16:03.545 21:18:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75901' 00:16:03.545 21:18:52 -- common/autotest_common.sh@955 -- # kill 75901 00:16:03.545 21:18:52 -- common/autotest_common.sh@960 -- # wait 75901 00:16:03.811 00:16:03.811 real 0m2.075s 00:16:03.811 user 0m5.804s 00:16:03.811 sys 0m0.381s 00:16:03.811 21:18:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:03.811 21:18:52 -- common/autotest_common.sh@10 -- # set +x 00:16:03.811 ************************************ 00:16:03.811 END TEST locking_overlapped_coremask 00:16:03.811 ************************************ 00:16:03.811 21:18:53 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:16:03.811 21:18:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:03.811 21:18:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:03.811 21:18:53 -- common/autotest_common.sh@10 -- # set +x 00:16:04.070 ************************************ 00:16:04.070 START TEST locking_overlapped_coremask_via_rpc 00:16:04.070 ************************************ 
00:16:04.070 21:18:53 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:16:04.070 21:18:53 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=75981 00:16:04.070 21:18:53 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:16:04.070 21:18:53 -- event/cpu_locks.sh@149 -- # waitforlisten 75981 /var/tmp/spdk.sock 00:16:04.070 21:18:53 -- common/autotest_common.sh@817 -- # '[' -z 75981 ']' 00:16:04.070 21:18:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.070 21:18:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:04.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.070 21:18:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.070 21:18:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:04.070 21:18:53 -- common/autotest_common.sh@10 -- # set +x 00:16:04.070 [2024-04-26 21:18:53.190651] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:04.070 [2024-04-26 21:18:53.190730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75981 ] 00:16:04.329 [2024-04-26 21:18:53.330593] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:16:04.330 [2024-04-26 21:18:53.330658] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.330 [2024-04-26 21:18:53.383783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.330 [2024-04-26 21:18:53.383967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.330 [2024-04-26 21:18:53.383969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.899 21:18:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:04.899 21:18:54 -- common/autotest_common.sh@850 -- # return 0 00:16:04.899 21:18:54 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:16:04.899 21:18:54 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=76011 00:16:04.899 21:18:54 -- event/cpu_locks.sh@153 -- # waitforlisten 76011 /var/tmp/spdk2.sock 00:16:04.899 21:18:54 -- common/autotest_common.sh@817 -- # '[' -z 76011 ']' 00:16:04.899 21:18:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:04.899 21:18:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:04.899 21:18:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:04.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:04.899 21:18:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:04.899 21:18:54 -- common/autotest_common.sh@10 -- # set +x 00:16:04.899 [2024-04-26 21:18:54.126649] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:16:04.899 [2024-04-26 21:18:54.126720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76011 ] 00:16:05.158 [2024-04-26 21:18:54.261470] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:16:05.158 [2024-04-26 21:18:54.261515] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:05.158 [2024-04-26 21:18:54.365265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.158 [2024-04-26 21:18:54.368414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.158 [2024-04-26 21:18:54.368418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:06.095 21:18:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:06.095 21:18:55 -- common/autotest_common.sh@850 -- # return 0 00:16:06.095 21:18:55 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:16:06.095 21:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.095 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:16:06.095 21:18:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.095 21:18:55 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:06.095 21:18:55 -- common/autotest_common.sh@638 -- # local es=0 00:16:06.095 21:18:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:06.095 21:18:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:06.095 21:18:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:06.095 21:18:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:06.095 21:18:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:06.095 21:18:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:06.095 21:18:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.095 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:16:06.095 [2024-04-26 21:18:55.064508] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75981 has claimed it. 
00:16:06.095 2024/04/26 21:18:55 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:16:06.095 request: 00:16:06.095 { 00:16:06.095 "method": "framework_enable_cpumask_locks", 00:16:06.095 "params": {} 00:16:06.095 } 00:16:06.095 Got JSON-RPC error response 00:16:06.095 GoRPCClient: error on JSON-RPC call 00:16:06.095 21:18:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:06.095 21:18:55 -- common/autotest_common.sh@641 -- # es=1 00:16:06.095 21:18:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:06.095 21:18:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:06.095 21:18:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:06.095 21:18:55 -- event/cpu_locks.sh@158 -- # waitforlisten 75981 /var/tmp/spdk.sock 00:16:06.095 21:18:55 -- common/autotest_common.sh@817 -- # '[' -z 75981 ']' 00:16:06.095 21:18:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.095 21:18:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:06.095 21:18:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.095 21:18:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:06.095 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:16:06.354 21:18:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:06.354 21:18:55 -- common/autotest_common.sh@850 -- # return 0 00:16:06.354 21:18:55 -- event/cpu_locks.sh@159 -- # waitforlisten 76011 /var/tmp/spdk2.sock 00:16:06.354 21:18:55 -- common/autotest_common.sh@817 -- # '[' -z 76011 ']' 00:16:06.354 21:18:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:06.354 21:18:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:06.354 21:18:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:06.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
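The -32603 response above is the expected outcome of the overlapped via-RPC case: the first target (-m 0x7, locks taken over RPC) already owns the core-2 lock, so asking the second target (-m 0x1c, started with --disable-cpumask-locks) to take its locks fails at core 2. A sketch of reproducing just that call against the second instance's socket, assuming both targets are still running:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Expected to fail with Code=-32603 "Failed to claim CPU core: 2".
  if ! $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
      echo "expected failure: core 2 is already locked by the first target" >&2
  fi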
00:16:06.354 21:18:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:06.354 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:16:06.354 21:18:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:06.354 21:18:55 -- common/autotest_common.sh@850 -- # return 0 00:16:06.354 21:18:55 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:16:06.354 21:18:55 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:06.354 21:18:55 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:06.354 21:18:55 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:06.354 00:16:06.354 real 0m2.469s 00:16:06.354 user 0m1.193s 00:16:06.354 sys 0m0.209s 00:16:06.354 21:18:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:06.354 21:18:55 -- common/autotest_common.sh@10 -- # set +x 00:16:06.354 ************************************ 00:16:06.354 END TEST locking_overlapped_coremask_via_rpc 00:16:06.354 ************************************ 00:16:06.613 21:18:55 -- event/cpu_locks.sh@174 -- # cleanup 00:16:06.613 21:18:55 -- event/cpu_locks.sh@15 -- # [[ -z 75981 ]] 00:16:06.613 21:18:55 -- event/cpu_locks.sh@15 -- # killprocess 75981 00:16:06.613 21:18:55 -- common/autotest_common.sh@936 -- # '[' -z 75981 ']' 00:16:06.613 21:18:55 -- common/autotest_common.sh@940 -- # kill -0 75981 00:16:06.613 21:18:55 -- common/autotest_common.sh@941 -- # uname 00:16:06.613 21:18:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:06.613 21:18:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75981 00:16:06.613 21:18:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:06.613 21:18:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:06.613 21:18:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75981' 00:16:06.613 killing process with pid 75981 00:16:06.613 21:18:55 -- common/autotest_common.sh@955 -- # kill 75981 00:16:06.613 21:18:55 -- common/autotest_common.sh@960 -- # wait 75981 00:16:06.871 21:18:56 -- event/cpu_locks.sh@16 -- # [[ -z 76011 ]] 00:16:06.871 21:18:56 -- event/cpu_locks.sh@16 -- # killprocess 76011 00:16:06.871 21:18:56 -- common/autotest_common.sh@936 -- # '[' -z 76011 ']' 00:16:06.871 21:18:56 -- common/autotest_common.sh@940 -- # kill -0 76011 00:16:06.871 21:18:56 -- common/autotest_common.sh@941 -- # uname 00:16:06.871 21:18:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:06.871 21:18:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76011 00:16:06.871 21:18:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:06.871 21:18:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:06.871 21:18:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76011' 00:16:06.871 killing process with pid 76011 00:16:06.871 21:18:56 -- common/autotest_common.sh@955 -- # kill 76011 00:16:06.871 21:18:56 -- common/autotest_common.sh@960 -- # wait 76011 00:16:07.131 21:18:56 -- event/cpu_locks.sh@18 -- # rm -f 00:16:07.131 21:18:56 -- event/cpu_locks.sh@1 -- # cleanup 00:16:07.131 21:18:56 -- event/cpu_locks.sh@15 -- # [[ -z 75981 ]] 00:16:07.131 21:18:56 -- event/cpu_locks.sh@15 -- # killprocess 75981 00:16:07.131 21:18:56 -- 
common/autotest_common.sh@936 -- # '[' -z 75981 ']' 00:16:07.131 21:18:56 -- common/autotest_common.sh@940 -- # kill -0 75981 00:16:07.131 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75981) - No such process 00:16:07.131 Process with pid 75981 is not found 00:16:07.131 21:18:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75981 is not found' 00:16:07.131 21:18:56 -- event/cpu_locks.sh@16 -- # [[ -z 76011 ]] 00:16:07.131 21:18:56 -- event/cpu_locks.sh@16 -- # killprocess 76011 00:16:07.131 21:18:56 -- common/autotest_common.sh@936 -- # '[' -z 76011 ']' 00:16:07.131 21:18:56 -- common/autotest_common.sh@940 -- # kill -0 76011 00:16:07.131 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76011) - No such process 00:16:07.131 Process with pid 76011 is not found 00:16:07.131 21:18:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76011 is not found' 00:16:07.131 21:18:56 -- event/cpu_locks.sh@18 -- # rm -f 00:16:07.131 00:16:07.131 real 0m18.814s 00:16:07.131 user 0m32.707s 00:16:07.131 sys 0m5.099s 00:16:07.131 21:18:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:07.131 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:16:07.131 ************************************ 00:16:07.131 END TEST cpu_locks 00:16:07.131 ************************************ 00:16:07.390 00:16:07.390 real 0m45.794s 00:16:07.390 user 1m26.641s 00:16:07.390 sys 0m9.220s 00:16:07.390 21:18:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:07.390 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:16:07.390 ************************************ 00:16:07.390 END TEST event 00:16:07.390 ************************************ 00:16:07.390 21:18:56 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:07.390 21:18:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:07.390 21:18:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:07.390 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:16:07.390 ************************************ 00:16:07.390 START TEST thread 00:16:07.390 ************************************ 00:16:07.390 21:18:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:07.649 * Looking for test storage... 00:16:07.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:16:07.649 21:18:56 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:07.649 21:18:56 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:16:07.649 21:18:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:07.649 21:18:56 -- common/autotest_common.sh@10 -- # set +x 00:16:07.649 ************************************ 00:16:07.649 START TEST thread_poller_perf 00:16:07.649 ************************************ 00:16:07.649 21:18:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:07.649 [2024-04-26 21:18:56.798865] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:16:07.649 [2024-04-26 21:18:56.798974] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76168 ] 00:16:07.909 [2024-04-26 21:18:56.937641] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.909 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:16:07.909 [2024-04-26 21:18:56.989373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.846 ====================================== 00:16:08.846 busy:2296784892 (cyc) 00:16:08.846 total_run_count: 336000 00:16:08.846 tsc_hz: 2290000000 (cyc) 00:16:08.846 ====================================== 00:16:08.846 poller_cost: 6835 (cyc), 2984 (nsec) 00:16:08.846 00:16:08.846 real 0m1.287s 00:16:08.846 user 0m1.131s 00:16:08.846 sys 0m0.050s 00:16:08.846 21:18:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:08.846 21:18:58 -- common/autotest_common.sh@10 -- # set +x 00:16:08.846 ************************************ 00:16:08.846 END TEST thread_poller_perf 00:16:08.846 ************************************ 00:16:09.105 21:18:58 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:09.105 21:18:58 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:16:09.105 21:18:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:09.105 21:18:58 -- common/autotest_common.sh@10 -- # set +x 00:16:09.105 ************************************ 00:16:09.105 START TEST thread_poller_perf 00:16:09.105 ************************************ 00:16:09.105 21:18:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:09.105 [2024-04-26 21:18:58.226868] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:09.105 [2024-04-26 21:18:58.226972] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76207 ] 00:16:09.364 [2024-04-26 21:18:58.368735] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.364 [2024-04-26 21:18:58.422400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.364 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:16:10.301 ====================================== 00:16:10.301 busy:2292307446 (cyc) 00:16:10.301 total_run_count: 4261000 00:16:10.301 tsc_hz: 2290000000 (cyc) 00:16:10.301 ====================================== 00:16:10.301 poller_cost: 537 (cyc), 234 (nsec) 00:16:10.301 00:16:10.301 real 0m1.292s 00:16:10.301 user 0m1.134s 00:16:10.301 sys 0m0.051s 00:16:10.301 21:18:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:10.301 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:16:10.301 ************************************ 00:16:10.301 END TEST thread_poller_perf 00:16:10.301 ************************************ 00:16:10.301 21:18:59 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:16:10.301 00:16:10.301 real 0m2.986s 00:16:10.301 user 0m2.410s 00:16:10.301 sys 0m0.345s 00:16:10.301 21:18:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:10.301 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:16:10.301 ************************************ 00:16:10.301 END TEST thread 00:16:10.301 ************************************ 00:16:10.559 21:18:59 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:10.559 21:18:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:10.559 21:18:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:10.559 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:16:10.559 ************************************ 00:16:10.559 START TEST accel 00:16:10.559 ************************************ 00:16:10.559 21:18:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:10.559 * Looking for test storage... 00:16:10.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:16:10.559 21:18:59 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:16:10.559 21:18:59 -- accel/accel.sh@82 -- # get_expected_opcs 00:16:10.559 21:18:59 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:10.818 21:18:59 -- accel/accel.sh@62 -- # spdk_tgt_pid=76288 00:16:10.818 21:18:59 -- accel/accel.sh@63 -- # waitforlisten 76288 00:16:10.818 21:18:59 -- common/autotest_common.sh@817 -- # '[' -z 76288 ']' 00:16:10.818 21:18:59 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:16:10.818 21:18:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.818 21:18:59 -- accel/accel.sh@61 -- # build_accel_config 00:16:10.818 21:18:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:10.818 21:18:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.818 21:18:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:10.818 21:18:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:10.818 21:18:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:10.818 21:18:59 -- common/autotest_common.sh@10 -- # set +x 00:16:10.818 21:18:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:10.818 21:18:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:10.818 21:18:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:10.818 21:18:59 -- accel/accel.sh@40 -- # local IFS=, 00:16:10.818 21:18:59 -- accel/accel.sh@41 -- # jq -r . 00:16:10.818 [2024-04-26 21:18:59.872802] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:16:10.818 [2024-04-26 21:18:59.872888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76288 ] 00:16:10.818 [2024-04-26 21:19:00.011579] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.818 [2024-04-26 21:19:00.068612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.755 21:19:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:11.755 21:19:00 -- common/autotest_common.sh@850 -- # return 0 00:16:11.755 21:19:00 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:16:11.755 21:19:00 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:16:11.755 21:19:00 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:16:11.755 21:19:00 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:16:11.755 21:19:00 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:16:11.755 21:19:00 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:16:11.755 21:19:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:11.755 21:19:00 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:16:11.755 21:19:00 -- common/autotest_common.sh@10 -- # set +x 00:16:11.755 21:19:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:11.755 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.755 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.755 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.755 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.755 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.755 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.755 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.755 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.755 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.755 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.755 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.755 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.755 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.755 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.755 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.755 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.755 
21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.755 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.756 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.756 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.756 21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.756 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.756 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.756 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.756 21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.756 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.756 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.756 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.756 21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.756 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.756 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.756 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.756 21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.756 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.756 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.756 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.756 21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.756 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.756 21:19:00 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:11.756 21:19:00 -- accel/accel.sh@72 -- # IFS== 00:16:11.756 21:19:00 -- accel/accel.sh@72 -- # read -r opc module 00:16:11.756 21:19:00 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:11.756 21:19:00 -- accel/accel.sh@75 -- # killprocess 76288 00:16:11.756 21:19:00 -- common/autotest_common.sh@936 -- # '[' -z 76288 ']' 00:16:11.756 21:19:00 -- common/autotest_common.sh@940 -- # kill -0 76288 00:16:11.756 21:19:00 -- common/autotest_common.sh@941 -- # uname 00:16:11.756 21:19:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:11.756 21:19:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76288 00:16:11.756 21:19:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:11.756 killing process with pid 76288 00:16:11.756 21:19:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:11.756 21:19:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76288' 00:16:11.756 21:19:00 -- common/autotest_common.sh@955 -- # kill 76288 00:16:11.756 21:19:00 -- common/autotest_common.sh@960 -- # wait 76288 00:16:12.015 21:19:01 -- accel/accel.sh@76 -- # trap - ERR 00:16:12.015 21:19:01 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:16:12.015 21:19:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:12.015 21:19:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:12.015 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:16:12.276 21:19:01 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:16:12.276 21:19:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:16:12.276 21:19:01 -- accel/accel.sh@12 -- # build_accel_config 00:16:12.276 21:19:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:12.276 21:19:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:12.276 21:19:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:12.276 21:19:01 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:12.276 21:19:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:12.276 21:19:01 -- accel/accel.sh@40 -- # local IFS=, 00:16:12.276 21:19:01 -- accel/accel.sh@41 -- # jq -r . 00:16:12.276 21:19:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:12.276 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:16:12.276 21:19:01 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:16:12.276 21:19:01 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:12.276 21:19:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:12.276 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:16:12.276 ************************************ 00:16:12.276 START TEST accel_missing_filename 00:16:12.276 ************************************ 00:16:12.276 21:19:01 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:16:12.276 21:19:01 -- common/autotest_common.sh@638 -- # local es=0 00:16:12.276 21:19:01 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:16:12.276 21:19:01 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:12.276 21:19:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:12.276 21:19:01 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:12.276 21:19:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:12.276 21:19:01 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:16:12.276 21:19:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:16:12.276 21:19:01 -- accel/accel.sh@12 -- # build_accel_config 00:16:12.276 21:19:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:12.276 21:19:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:12.276 21:19:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:12.276 21:19:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:12.276 21:19:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:12.276 21:19:01 -- accel/accel.sh@40 -- # local IFS=, 00:16:12.276 21:19:01 -- accel/accel.sh@41 -- # jq -r . 00:16:12.276 [2024-04-26 21:19:01.489278] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:12.276 [2024-04-26 21:19:01.489370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76367 ] 00:16:12.536 [2024-04-26 21:19:01.630108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.536 [2024-04-26 21:19:01.677002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.536 [2024-04-26 21:19:01.717424] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:12.536 [2024-04-26 21:19:01.776586] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:16:12.804 A filename is required. 
00:16:12.804 21:19:01 -- common/autotest_common.sh@641 -- # es=234 00:16:12.804 21:19:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:12.804 21:19:01 -- common/autotest_common.sh@650 -- # es=106 00:16:12.804 21:19:01 -- common/autotest_common.sh@651 -- # case "$es" in 00:16:12.804 21:19:01 -- common/autotest_common.sh@658 -- # es=1 00:16:12.804 21:19:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:12.804 00:16:12.804 real 0m0.391s 00:16:12.804 user 0m0.243s 00:16:12.804 sys 0m0.089s 00:16:12.804 21:19:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:12.804 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:16:12.804 ************************************ 00:16:12.804 END TEST accel_missing_filename 00:16:12.804 ************************************ 00:16:12.804 21:19:01 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:12.804 21:19:01 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:16:12.804 21:19:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:12.804 21:19:01 -- common/autotest_common.sh@10 -- # set +x 00:16:12.804 ************************************ 00:16:12.804 START TEST accel_compress_verify 00:16:12.804 ************************************ 00:16:12.804 21:19:01 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:12.804 21:19:01 -- common/autotest_common.sh@638 -- # local es=0 00:16:12.804 21:19:01 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:12.804 21:19:01 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:12.804 21:19:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:12.804 21:19:01 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:12.804 21:19:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:12.804 21:19:01 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:12.804 21:19:01 -- accel/accel.sh@12 -- # build_accel_config 00:16:12.804 21:19:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:12.804 21:19:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:12.804 21:19:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:12.804 21:19:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:12.804 21:19:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:12.804 21:19:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:12.804 21:19:01 -- accel/accel.sh@40 -- # local IFS=, 00:16:12.804 21:19:01 -- accel/accel.sh@41 -- # jq -r . 00:16:12.804 [2024-04-26 21:19:02.017990] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:16:12.804 [2024-04-26 21:19:02.018078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76396 ] 00:16:13.079 [2024-04-26 21:19:02.159032] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.079 [2024-04-26 21:19:02.212098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.079 [2024-04-26 21:19:02.255317] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:13.079 [2024-04-26 21:19:02.315869] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:16:13.338 00:16:13.338 Compression does not support the verify option, aborting. 00:16:13.338 21:19:02 -- common/autotest_common.sh@641 -- # es=161 00:16:13.338 21:19:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:13.339 21:19:02 -- common/autotest_common.sh@650 -- # es=33 00:16:13.339 21:19:02 -- common/autotest_common.sh@651 -- # case "$es" in 00:16:13.339 21:19:02 -- common/autotest_common.sh@658 -- # es=1 00:16:13.339 21:19:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:13.339 00:16:13.339 real 0m0.412s 00:16:13.339 user 0m0.248s 00:16:13.339 sys 0m0.102s 00:16:13.339 21:19:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:13.339 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:16:13.339 ************************************ 00:16:13.339 END TEST accel_compress_verify 00:16:13.339 ************************************ 00:16:13.339 21:19:02 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:16:13.339 21:19:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:13.339 21:19:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.339 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:16:13.339 ************************************ 00:16:13.339 START TEST accel_wrong_workload 00:16:13.339 ************************************ 00:16:13.339 21:19:02 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:16:13.339 21:19:02 -- common/autotest_common.sh@638 -- # local es=0 00:16:13.339 21:19:02 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:16:13.339 21:19:02 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:13.339 21:19:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:13.339 21:19:02 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:13.339 21:19:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:13.339 21:19:02 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:16:13.339 21:19:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:16:13.339 21:19:02 -- accel/accel.sh@12 -- # build_accel_config 00:16:13.339 21:19:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:13.339 21:19:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:13.339 21:19:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:13.339 21:19:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:13.339 21:19:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:13.339 21:19:02 -- accel/accel.sh@40 -- # local IFS=, 00:16:13.339 21:19:02 -- accel/accel.sh@41 -- # jq -r . 
00:16:13.339 Unsupported workload type: foobar 00:16:13.339 [2024-04-26 21:19:02.549241] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:16:13.339 accel_perf options: 00:16:13.339 [-h help message] 00:16:13.339 [-q queue depth per core] 00:16:13.339 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:13.339 [-T number of threads per core 00:16:13.339 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:13.339 [-t time in seconds] 00:16:13.339 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:13.339 [ dif_verify, , dif_generate, dif_generate_copy 00:16:13.339 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:13.339 [-l for compress/decompress workloads, name of uncompressed input file 00:16:13.339 [-S for crc32c workload, use this seed value (default 0) 00:16:13.339 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:13.339 [-f for fill workload, use this BYTE value (default 255) 00:16:13.339 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:13.339 [-y verify result if this switch is on] 00:16:13.339 [-a tasks to allocate per core (default: same value as -q)] 00:16:13.339 Can be used to spread operations across a wider range of memory. 00:16:13.339 21:19:02 -- common/autotest_common.sh@641 -- # es=1 00:16:13.339 21:19:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:13.339 21:19:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:13.339 21:19:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:13.339 00:16:13.339 real 0m0.035s 00:16:13.339 user 0m0.020s 00:16:13.339 sys 0m0.015s 00:16:13.339 21:19:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:13.339 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:16:13.339 ************************************ 00:16:13.339 END TEST accel_wrong_workload 00:16:13.339 ************************************ 00:16:13.599 21:19:02 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:16:13.599 21:19:02 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:16:13.599 21:19:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.599 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:16:13.599 ************************************ 00:16:13.599 START TEST accel_negative_buffers 00:16:13.599 ************************************ 00:16:13.599 21:19:02 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:16:13.599 21:19:02 -- common/autotest_common.sh@638 -- # local es=0 00:16:13.599 21:19:02 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:16:13.599 21:19:02 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:13.599 21:19:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:13.599 21:19:02 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:13.599 21:19:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:13.599 21:19:02 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:16:13.599 21:19:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:16:13.599 21:19:02 -- accel/accel.sh@12 -- # 
build_accel_config 00:16:13.599 21:19:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:13.599 21:19:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:13.599 21:19:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:13.599 21:19:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:13.599 21:19:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:13.599 21:19:02 -- accel/accel.sh@40 -- # local IFS=, 00:16:13.599 21:19:02 -- accel/accel.sh@41 -- # jq -r . 00:16:13.599 -x option must be non-negative. 00:16:13.599 [2024-04-26 21:19:02.718191] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:16:13.599 accel_perf options: 00:16:13.599 [-h help message] 00:16:13.599 [-q queue depth per core] 00:16:13.599 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:13.599 [-T number of threads per core 00:16:13.599 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:13.599 [-t time in seconds] 00:16:13.599 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:13.599 [ dif_verify, , dif_generate, dif_generate_copy 00:16:13.599 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:13.599 [-l for compress/decompress workloads, name of uncompressed input file 00:16:13.599 [-S for crc32c workload, use this seed value (default 0) 00:16:13.599 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:13.599 [-f for fill workload, use this BYTE value (default 255) 00:16:13.599 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:13.599 [-y verify result if this switch is on] 00:16:13.599 [-a tasks to allocate per core (default: same value as -q)] 00:16:13.599 Can be used to spread operations across a wider range of memory. 
00:16:13.599 21:19:02 -- common/autotest_common.sh@641 -- # es=1 00:16:13.599 21:19:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:13.599 21:19:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:13.599 21:19:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:13.599 00:16:13.599 real 0m0.026s 00:16:13.599 user 0m0.011s 00:16:13.599 sys 0m0.015s 00:16:13.599 21:19:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:13.599 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:16:13.599 ************************************ 00:16:13.599 END TEST accel_negative_buffers 00:16:13.599 ************************************ 00:16:13.599 21:19:02 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:16:13.599 21:19:02 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:13.599 21:19:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.599 21:19:02 -- common/autotest_common.sh@10 -- # set +x 00:16:13.864 ************************************ 00:16:13.864 START TEST accel_crc32c 00:16:13.864 ************************************ 00:16:13.864 21:19:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:16:13.864 21:19:02 -- accel/accel.sh@16 -- # local accel_opc 00:16:13.864 21:19:02 -- accel/accel.sh@17 -- # local accel_module 00:16:13.864 21:19:02 -- accel/accel.sh@19 -- # IFS=: 00:16:13.864 21:19:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:16:13.864 21:19:02 -- accel/accel.sh@19 -- # read -r var val 00:16:13.864 21:19:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:16:13.864 21:19:02 -- accel/accel.sh@12 -- # build_accel_config 00:16:13.864 21:19:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:13.864 21:19:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:13.864 21:19:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:13.864 21:19:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:13.864 21:19:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:13.864 21:19:02 -- accel/accel.sh@40 -- # local IFS=, 00:16:13.864 21:19:02 -- accel/accel.sh@41 -- # jq -r . 00:16:13.864 [2024-04-26 21:19:02.903045] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:16:13.864 [2024-04-26 21:19:02.903133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76473 ] 00:16:13.864 [2024-04-26 21:19:03.042160] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.864 [2024-04-26 21:19:03.093837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val= 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val= 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val=0x1 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val= 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val= 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val=crc32c 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val=32 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val= 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val=software 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@22 -- # accel_module=software 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val=32 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val=32 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val=1 00:16:14.127 21:19:03 
-- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val=Yes 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val= 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:14.127 21:19:03 -- accel/accel.sh@20 -- # val= 00:16:14.127 21:19:03 -- accel/accel.sh@21 -- # case "$var" in 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # IFS=: 00:16:14.127 21:19:03 -- accel/accel.sh@19 -- # read -r var val 00:16:15.066 21:19:04 -- accel/accel.sh@20 -- # val= 00:16:15.066 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.066 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.066 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.066 21:19:04 -- accel/accel.sh@20 -- # val= 00:16:15.066 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.066 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.066 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.066 21:19:04 -- accel/accel.sh@20 -- # val= 00:16:15.066 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.066 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.066 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.066 21:19:04 -- accel/accel.sh@20 -- # val= 00:16:15.066 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.066 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.066 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.066 21:19:04 -- accel/accel.sh@20 -- # val= 00:16:15.066 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.066 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.066 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.066 21:19:04 -- accel/accel.sh@20 -- # val= 00:16:15.066 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.066 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.066 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.066 21:19:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:15.066 21:19:04 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:15.067 21:19:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:15.067 00:16:15.067 real 0m1.400s 00:16:15.067 user 0m1.216s 00:16:15.067 sys 0m0.097s 00:16:15.067 21:19:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:15.067 21:19:04 -- common/autotest_common.sh@10 -- # set +x 00:16:15.067 ************************************ 00:16:15.067 END TEST accel_crc32c 00:16:15.067 ************************************ 00:16:15.326 21:19:04 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:16:15.326 21:19:04 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:15.326 21:19:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:15.326 21:19:04 -- common/autotest_common.sh@10 -- # set +x 00:16:15.326 ************************************ 00:16:15.326 START TEST accel_crc32c_C2 00:16:15.326 
************************************ 00:16:15.326 21:19:04 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:16:15.326 21:19:04 -- accel/accel.sh@16 -- # local accel_opc 00:16:15.326 21:19:04 -- accel/accel.sh@17 -- # local accel_module 00:16:15.326 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.326 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.326 21:19:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:16:15.326 21:19:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:16:15.326 21:19:04 -- accel/accel.sh@12 -- # build_accel_config 00:16:15.326 21:19:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:15.326 21:19:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:15.326 21:19:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:15.326 21:19:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:15.326 21:19:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:15.326 21:19:04 -- accel/accel.sh@40 -- # local IFS=, 00:16:15.326 21:19:04 -- accel/accel.sh@41 -- # jq -r . 00:16:15.326 [2024-04-26 21:19:04.433183] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:15.326 [2024-04-26 21:19:04.433277] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76506 ] 00:16:15.326 [2024-04-26 21:19:04.571509] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.586 [2024-04-26 21:19:04.619696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val= 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val= 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val=0x1 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val= 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val= 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val=crc32c 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val=0 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" 
in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val= 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val=software 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@22 -- # accel_module=software 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val=32 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val=32 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val=1 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val=Yes 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val= 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:15.586 21:19:04 -- accel/accel.sh@20 -- # val= 00:16:15.586 21:19:04 -- accel/accel.sh@21 -- # case "$var" in 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # IFS=: 00:16:15.586 21:19:04 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:05 -- accel/accel.sh@20 -- # val= 00:16:16.975 21:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:05 -- accel/accel.sh@20 -- # val= 00:16:16.975 21:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:05 -- accel/accel.sh@20 -- # val= 00:16:16.975 21:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:05 -- accel/accel.sh@20 -- # val= 00:16:16.975 21:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:05 -- accel/accel.sh@20 -- # val= 00:16:16.975 21:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:05 -- accel/accel.sh@20 -- # val= 
00:16:16.975 21:19:05 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:16.975 21:19:05 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:16.975 21:19:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:16.975 00:16:16.975 real 0m1.394s 00:16:16.975 user 0m1.218s 00:16:16.975 sys 0m0.092s 00:16:16.975 21:19:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:16.975 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:16:16.975 ************************************ 00:16:16.975 END TEST accel_crc32c_C2 00:16:16.975 ************************************ 00:16:16.975 21:19:05 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:16:16.975 21:19:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:16.975 21:19:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:16.975 21:19:05 -- common/autotest_common.sh@10 -- # set +x 00:16:16.975 ************************************ 00:16:16.975 START TEST accel_copy 00:16:16.975 ************************************ 00:16:16.975 21:19:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:16:16.975 21:19:05 -- accel/accel.sh@16 -- # local accel_opc 00:16:16.975 21:19:05 -- accel/accel.sh@17 -- # local accel_module 00:16:16.975 21:19:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:05 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:16:16.975 21:19:05 -- accel/accel.sh@12 -- # build_accel_config 00:16:16.975 21:19:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:16.975 21:19:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:16.975 21:19:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:16.975 21:19:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:16.975 21:19:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:16.975 21:19:05 -- accel/accel.sh@40 -- # local IFS=, 00:16:16.975 21:19:05 -- accel/accel.sh@41 -- # jq -r . 00:16:16.975 [2024-04-26 21:19:05.944981] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:16:16.975 [2024-04-26 21:19:05.945039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76550 ] 00:16:16.975 [2024-04-26 21:19:06.085543] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.975 [2024-04-26 21:19:06.132054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val= 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val= 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val=0x1 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val= 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val= 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val=copy 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@23 -- # accel_opc=copy 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val= 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val=software 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@22 -- # accel_module=software 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val=32 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val=32 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val=1 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:16.975 
21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val=Yes 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val= 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:16.975 21:19:06 -- accel/accel.sh@20 -- # val= 00:16:16.975 21:19:06 -- accel/accel.sh@21 -- # case "$var" in 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # IFS=: 00:16:16.975 21:19:06 -- accel/accel.sh@19 -- # read -r var val 00:16:18.355 21:19:07 -- accel/accel.sh@20 -- # val= 00:16:18.355 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.355 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.355 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.355 21:19:07 -- accel/accel.sh@20 -- # val= 00:16:18.355 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.355 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.355 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.355 21:19:07 -- accel/accel.sh@20 -- # val= 00:16:18.355 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.355 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.355 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.355 21:19:07 -- accel/accel.sh@20 -- # val= 00:16:18.355 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.355 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.355 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.355 21:19:07 -- accel/accel.sh@20 -- # val= 00:16:18.355 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.355 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.355 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.355 21:19:07 -- accel/accel.sh@20 -- # val= 00:16:18.355 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.355 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.355 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.355 21:19:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:18.355 21:19:07 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:16:18.355 21:19:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:18.355 00:16:18.355 real 0m1.389s 00:16:18.355 user 0m1.202s 00:16:18.355 sys 0m0.099s 00:16:18.355 21:19:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:18.355 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:16:18.355 ************************************ 00:16:18.355 END TEST accel_copy 00:16:18.355 ************************************ 00:16:18.355 21:19:07 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:18.355 21:19:07 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:16:18.356 21:19:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:18.356 21:19:07 -- common/autotest_common.sh@10 -- # set +x 00:16:18.356 ************************************ 00:16:18.356 START TEST accel_fill 00:16:18.356 ************************************ 00:16:18.356 21:19:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:18.356 21:19:07 -- accel/accel.sh@16 -- # local accel_opc 00:16:18.356 21:19:07 -- accel/accel.sh@17 -- # local 
accel_module 00:16:18.356 21:19:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:18.356 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.356 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.356 21:19:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:16:18.356 21:19:07 -- accel/accel.sh@12 -- # build_accel_config 00:16:18.356 21:19:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:18.356 21:19:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:18.356 21:19:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:18.356 21:19:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:18.356 21:19:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:18.356 21:19:07 -- accel/accel.sh@40 -- # local IFS=, 00:16:18.356 21:19:07 -- accel/accel.sh@41 -- # jq -r . 00:16:18.356 [2024-04-26 21:19:07.472125] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:18.356 [2024-04-26 21:19:07.472258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76583 ] 00:16:18.615 [2024-04-26 21:19:07.612805] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.615 [2024-04-26 21:19:07.661340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val= 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val= 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val=0x1 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val= 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val= 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val=fill 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@23 -- # accel_opc=fill 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val=0x80 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val= 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case 
"$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val=software 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@22 -- # accel_module=software 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val=64 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val=64 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val=1 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val=Yes 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val= 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:18.615 21:19:07 -- accel/accel.sh@20 -- # val= 00:16:18.615 21:19:07 -- accel/accel.sh@21 -- # case "$var" in 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # IFS=: 00:16:18.615 21:19:07 -- accel/accel.sh@19 -- # read -r var val 00:16:19.994 21:19:08 -- accel/accel.sh@20 -- # val= 00:16:19.994 21:19:08 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # IFS=: 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # read -r var val 00:16:19.994 21:19:08 -- accel/accel.sh@20 -- # val= 00:16:19.994 21:19:08 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # IFS=: 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # read -r var val 00:16:19.994 21:19:08 -- accel/accel.sh@20 -- # val= 00:16:19.994 21:19:08 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # IFS=: 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # read -r var val 00:16:19.994 21:19:08 -- accel/accel.sh@20 -- # val= 00:16:19.994 21:19:08 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # IFS=: 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # read -r var val 00:16:19.994 21:19:08 -- accel/accel.sh@20 -- # val= 00:16:19.994 21:19:08 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # IFS=: 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # read -r var val 00:16:19.994 21:19:08 -- accel/accel.sh@20 -- # val= 00:16:19.994 21:19:08 -- accel/accel.sh@21 -- # case "$var" in 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # IFS=: 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # read -r var val 00:16:19.994 21:19:08 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:16:19.994 21:19:08 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:16:19.994 21:19:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:19.994 00:16:19.994 real 0m1.404s 00:16:19.994 user 0m1.214s 00:16:19.994 sys 0m0.093s 00:16:19.994 21:19:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:19.994 21:19:08 -- common/autotest_common.sh@10 -- # set +x 00:16:19.994 ************************************ 00:16:19.994 END TEST accel_fill 00:16:19.994 ************************************ 00:16:19.994 21:19:08 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:16:19.994 21:19:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:19.994 21:19:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.994 21:19:08 -- common/autotest_common.sh@10 -- # set +x 00:16:19.994 ************************************ 00:16:19.994 START TEST accel_copy_crc32c 00:16:19.994 ************************************ 00:16:19.994 21:19:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:16:19.994 21:19:08 -- accel/accel.sh@16 -- # local accel_opc 00:16:19.994 21:19:08 -- accel/accel.sh@17 -- # local accel_module 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # IFS=: 00:16:19.994 21:19:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:16:19.994 21:19:08 -- accel/accel.sh@19 -- # read -r var val 00:16:19.994 21:19:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:16:19.994 21:19:08 -- accel/accel.sh@12 -- # build_accel_config 00:16:19.994 21:19:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:19.994 21:19:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:19.994 21:19:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:19.994 21:19:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:19.994 21:19:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:19.994 21:19:08 -- accel/accel.sh@40 -- # local IFS=, 00:16:19.994 21:19:08 -- accel/accel.sh@41 -- # jq -r . 00:16:19.994 [2024-04-26 21:19:09.011832] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
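The accel_fill test that just completed is the only run in this stretch of the log that overrides the default settings: run_test launched it as accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y, and its config dump recorded val=0x80 (128) for the fill byte plus val=64 twice, lining up with the -q 64 and -a 64 arguments; that mapping is an inference from the dump, the trace itself does not name the flags. Recorded command line:

  # fill workload as recorded above; -f/-q/-a values copied verbatim from the log
  accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y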
00:16:19.994 [2024-04-26 21:19:09.012018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76627 ] 00:16:19.994 [2024-04-26 21:19:09.151304] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.994 [2024-04-26 21:19:09.203807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val= 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val= 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val=0x1 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val= 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val= 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val=copy_crc32c 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val=0 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val= 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val=software 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@22 -- # accel_module=software 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val=32 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val=32 
00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val=1 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val=Yes 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val= 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:20.253 21:19:09 -- accel/accel.sh@20 -- # val= 00:16:20.253 21:19:09 -- accel/accel.sh@21 -- # case "$var" in 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # IFS=: 00:16:20.253 21:19:09 -- accel/accel.sh@19 -- # read -r var val 00:16:21.190 21:19:10 -- accel/accel.sh@20 -- # val= 00:16:21.190 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.190 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.190 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.190 21:19:10 -- accel/accel.sh@20 -- # val= 00:16:21.190 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.190 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.190 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.190 21:19:10 -- accel/accel.sh@20 -- # val= 00:16:21.190 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.190 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.190 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.190 21:19:10 -- accel/accel.sh@20 -- # val= 00:16:21.190 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.190 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.190 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.190 21:19:10 -- accel/accel.sh@20 -- # val= 00:16:21.190 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.190 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.190 ************************************ 00:16:21.190 END TEST accel_copy_crc32c 00:16:21.190 ************************************ 00:16:21.190 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.190 21:19:10 -- accel/accel.sh@20 -- # val= 00:16:21.190 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.190 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.190 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.190 21:19:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:21.190 21:19:10 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:16:21.190 21:19:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:21.190 00:16:21.190 real 0m1.407s 00:16:21.190 user 0m1.223s 00:16:21.190 sys 0m0.093s 00:16:21.190 21:19:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:21.190 21:19:10 -- common/autotest_common.sh@10 -- # set +x 00:16:21.190 21:19:10 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:16:21.190 21:19:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:16:21.190 21:19:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:21.190 21:19:10 -- common/autotest_common.sh@10 -- # set +x 00:16:21.449 ************************************ 00:16:21.449 START TEST accel_copy_crc32c_C2 00:16:21.449 ************************************ 00:16:21.449 21:19:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:16:21.449 21:19:10 -- accel/accel.sh@16 -- # local accel_opc 00:16:21.449 21:19:10 -- accel/accel.sh@17 -- # local accel_module 00:16:21.449 21:19:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:16:21.449 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.449 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.449 21:19:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:16:21.449 21:19:10 -- accel/accel.sh@12 -- # build_accel_config 00:16:21.449 21:19:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:21.449 21:19:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:21.449 21:19:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:21.449 21:19:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:21.449 21:19:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:21.449 21:19:10 -- accel/accel.sh@40 -- # local IFS=, 00:16:21.449 21:19:10 -- accel/accel.sh@41 -- # jq -r . 00:16:21.449 [2024-04-26 21:19:10.519830] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:21.449 [2024-04-26 21:19:10.520002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76661 ] 00:16:21.449 [2024-04-26 21:19:10.663326] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.709 [2024-04-26 21:19:10.716652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val= 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val= 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val=0x1 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val= 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val= 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val=copy_crc32c 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val=0 00:16:21.709 21:19:10 -- 
accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val='8192 bytes' 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val= 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val=software 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@22 -- # accel_module=software 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val=32 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val=32 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val=1 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val=Yes 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val= 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:21.709 21:19:10 -- accel/accel.sh@20 -- # val= 00:16:21.709 21:19:10 -- accel/accel.sh@21 -- # case "$var" in 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # IFS=: 00:16:21.709 21:19:10 -- accel/accel.sh@19 -- # read -r var val 00:16:22.645 21:19:11 -- accel/accel.sh@20 -- # val= 00:16:22.645 21:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.645 21:19:11 -- accel/accel.sh@19 -- # IFS=: 00:16:22.645 21:19:11 -- accel/accel.sh@19 -- # read -r var val 00:16:22.645 21:19:11 -- accel/accel.sh@20 -- # val= 00:16:22.904 21:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.904 21:19:11 -- accel/accel.sh@19 -- # IFS=: 00:16:22.904 21:19:11 -- accel/accel.sh@19 -- # read -r var val 00:16:22.904 21:19:11 -- accel/accel.sh@20 -- # val= 00:16:22.904 21:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.904 21:19:11 -- accel/accel.sh@19 -- # IFS=: 00:16:22.904 21:19:11 -- accel/accel.sh@19 -- # read -r var val 
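Compared with the plain accel_copy_crc32c run earlier in this log, the accel_copy_crc32c_C2 invocation differs only in the trailing -C 2 flag, and in the config dump above one buffer size changes from '4096 bytes' to '8192 bytes'. The two run_test command lines are sketched below; reading -C as the number of chained source buffers is an assumption inferred from that size change, the trace does not label the flag:

  # plain variant, recorded earlier in this log
  accel_test -t 1 -w copy_crc32c -y
  # -C 2 variant under test here
  accel_test -t 1 -w copy_crc32c -y -C 2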
00:16:22.904 21:19:11 -- accel/accel.sh@20 -- # val= 00:16:22.904 21:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.904 21:19:11 -- accel/accel.sh@19 -- # IFS=: 00:16:22.904 21:19:11 -- accel/accel.sh@19 -- # read -r var val 00:16:22.904 21:19:11 -- accel/accel.sh@20 -- # val= 00:16:22.904 21:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.904 21:19:11 -- accel/accel.sh@19 -- # IFS=: 00:16:22.904 21:19:11 -- accel/accel.sh@19 -- # read -r var val 00:16:22.904 21:19:11 -- accel/accel.sh@20 -- # val= 00:16:22.904 21:19:11 -- accel/accel.sh@21 -- # case "$var" in 00:16:22.904 21:19:11 -- accel/accel.sh@19 -- # IFS=: 00:16:22.904 21:19:11 -- accel/accel.sh@19 -- # read -r var val 00:16:22.904 21:19:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:22.904 ************************************ 00:16:22.904 END TEST accel_copy_crc32c_C2 00:16:22.904 ************************************ 00:16:22.904 21:19:11 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:16:22.904 21:19:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:22.904 00:16:22.904 real 0m1.406s 00:16:22.904 user 0m1.222s 00:16:22.904 sys 0m0.098s 00:16:22.904 21:19:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:22.904 21:19:11 -- common/autotest_common.sh@10 -- # set +x 00:16:22.904 21:19:11 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:16:22.904 21:19:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:22.904 21:19:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:22.904 21:19:11 -- common/autotest_common.sh@10 -- # set +x 00:16:22.904 ************************************ 00:16:22.904 START TEST accel_dualcast 00:16:22.904 ************************************ 00:16:22.904 21:19:12 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:16:22.904 21:19:12 -- accel/accel.sh@16 -- # local accel_opc 00:16:22.904 21:19:12 -- accel/accel.sh@17 -- # local accel_module 00:16:22.904 21:19:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:16:22.904 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:22.904 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:22.904 21:19:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:16:22.904 21:19:12 -- accel/accel.sh@12 -- # build_accel_config 00:16:22.904 21:19:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:22.904 21:19:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:22.904 21:19:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:22.904 21:19:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:22.904 21:19:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:22.904 21:19:12 -- accel/accel.sh@40 -- # local IFS=, 00:16:22.905 21:19:12 -- accel/accel.sh@41 -- # jq -r . 00:16:22.905 [2024-04-26 21:19:12.049301] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:16:22.905 [2024-04-26 21:19:12.049427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76705 ] 00:16:23.165 [2024-04-26 21:19:12.197038] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.165 [2024-04-26 21:19:12.248569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val= 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val= 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val=0x1 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val= 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val= 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val=dualcast 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val= 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val=software 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@22 -- # accel_module=software 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val=32 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val=32 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val=1 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val='1 seconds' 
00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val=Yes 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val= 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:23.165 21:19:12 -- accel/accel.sh@20 -- # val= 00:16:23.165 21:19:12 -- accel/accel.sh@21 -- # case "$var" in 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # IFS=: 00:16:23.165 21:19:12 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val= 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val= 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val= 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val= 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val= 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val= 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:24.571 21:19:13 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:16:24.571 21:19:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:24.571 00:16:24.571 real 0m1.401s 00:16:24.571 user 0m0.010s 00:16:24.571 sys 0m0.003s 00:16:24.571 21:19:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:24.571 21:19:13 -- common/autotest_common.sh@10 -- # set +x 00:16:24.571 ************************************ 00:16:24.571 END TEST accel_dualcast 00:16:24.571 ************************************ 00:16:24.571 21:19:13 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:16:24.571 21:19:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:24.571 21:19:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.571 21:19:13 -- common/autotest_common.sh@10 -- # set +x 00:16:24.571 ************************************ 00:16:24.571 START TEST accel_compare 00:16:24.571 ************************************ 00:16:24.571 21:19:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:16:24.571 21:19:13 -- accel/accel.sh@16 -- # local accel_opc 00:16:24.571 21:19:13 -- accel/accel.sh@17 -- # local 
accel_module 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:16:24.571 21:19:13 -- accel/accel.sh@12 -- # build_accel_config 00:16:24.571 21:19:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:24.571 21:19:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:24.571 21:19:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:24.571 21:19:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:24.571 21:19:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:24.571 21:19:13 -- accel/accel.sh@40 -- # local IFS=, 00:16:24.571 21:19:13 -- accel/accel.sh@41 -- # jq -r . 00:16:24.571 [2024-04-26 21:19:13.565868] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:24.571 [2024-04-26 21:19:13.565960] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76739 ] 00:16:24.571 [2024-04-26 21:19:13.705286] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.571 [2024-04-26 21:19:13.756941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val= 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val= 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val=0x1 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val= 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val= 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val=compare 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@23 -- # accel_opc=compare 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val= 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val=software 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 
00:16:24.571 21:19:13 -- accel/accel.sh@22 -- # accel_module=software 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val=32 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val=32 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val=1 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val=Yes 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val= 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:24.571 21:19:13 -- accel/accel.sh@20 -- # val= 00:16:24.571 21:19:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # IFS=: 00:16:24.571 21:19:13 -- accel/accel.sh@19 -- # read -r var val 00:16:25.945 21:19:14 -- accel/accel.sh@20 -- # val= 00:16:25.945 21:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.945 21:19:14 -- accel/accel.sh@19 -- # IFS=: 00:16:25.945 21:19:14 -- accel/accel.sh@19 -- # read -r var val 00:16:25.945 21:19:14 -- accel/accel.sh@20 -- # val= 00:16:25.945 21:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.945 21:19:14 -- accel/accel.sh@19 -- # IFS=: 00:16:25.945 21:19:14 -- accel/accel.sh@19 -- # read -r var val 00:16:25.945 21:19:14 -- accel/accel.sh@20 -- # val= 00:16:25.945 21:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.945 21:19:14 -- accel/accel.sh@19 -- # IFS=: 00:16:25.945 21:19:14 -- accel/accel.sh@19 -- # read -r var val 00:16:25.945 21:19:14 -- accel/accel.sh@20 -- # val= 00:16:25.945 21:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.945 21:19:14 -- accel/accel.sh@19 -- # IFS=: 00:16:25.945 21:19:14 -- accel/accel.sh@19 -- # read -r var val 00:16:25.945 21:19:14 -- accel/accel.sh@20 -- # val= 00:16:25.945 21:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.945 21:19:14 -- accel/accel.sh@19 -- # IFS=: 00:16:25.945 21:19:14 -- accel/accel.sh@19 -- # read -r var val 00:16:25.945 21:19:14 -- accel/accel.sh@20 -- # val= 00:16:25.945 21:19:14 -- accel/accel.sh@21 -- # case "$var" in 00:16:25.945 21:19:14 -- accel/accel.sh@19 -- # IFS=: 00:16:25.945 21:19:14 -- accel/accel.sh@19 -- # read -r var val 00:16:25.945 21:19:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:25.945 21:19:14 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:16:25.945 21:19:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:25.945 00:16:25.945 real 0m1.396s 00:16:25.945 user 0m0.010s 00:16:25.945 sys 
0m0.003s 00:16:25.945 21:19:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:25.945 21:19:14 -- common/autotest_common.sh@10 -- # set +x 00:16:25.945 ************************************ 00:16:25.945 END TEST accel_compare 00:16:25.945 ************************************ 00:16:25.945 21:19:14 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:16:25.945 21:19:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:25.945 21:19:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.945 21:19:14 -- common/autotest_common.sh@10 -- # set +x 00:16:25.945 ************************************ 00:16:25.945 START TEST accel_xor 00:16:25.945 ************************************ 00:16:25.945 21:19:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:16:25.945 21:19:15 -- accel/accel.sh@16 -- # local accel_opc 00:16:25.945 21:19:15 -- accel/accel.sh@17 -- # local accel_module 00:16:25.945 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:25.945 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:25.945 21:19:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:16:25.945 21:19:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:16:25.945 21:19:15 -- accel/accel.sh@12 -- # build_accel_config 00:16:25.945 21:19:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:25.945 21:19:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:25.945 21:19:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:25.945 21:19:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:25.945 21:19:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:25.945 21:19:15 -- accel/accel.sh@40 -- # local IFS=, 00:16:25.945 21:19:15 -- accel/accel.sh@41 -- # jq -r . 00:16:25.945 [2024-04-26 21:19:15.065708] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
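The accel_xor run beginning above passes no -x option and its config dump a few entries below records val=2, where the second accel_xor test later in this log records val=3 after being launched with -x 3; reading -x as the number of xor source buffers is an inference from that pairing, not something the trace states. The two run_test command lines, copied from this log:

  # first xor run, no -x flag, dump shows val=2
  accel_test -t 1 -w xor -y
  # second xor run, dump shows val=3
  accel_test -t 1 -w xor -y -x 3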
00:16:25.945 [2024-04-26 21:19:15.065872] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76778 ] 00:16:26.202 [2024-04-26 21:19:15.209377] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.202 [2024-04-26 21:19:15.260571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.202 21:19:15 -- accel/accel.sh@20 -- # val= 00:16:26.202 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.202 21:19:15 -- accel/accel.sh@20 -- # val= 00:16:26.202 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.202 21:19:15 -- accel/accel.sh@20 -- # val=0x1 00:16:26.202 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.202 21:19:15 -- accel/accel.sh@20 -- # val= 00:16:26.202 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.202 21:19:15 -- accel/accel.sh@20 -- # val= 00:16:26.202 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.202 21:19:15 -- accel/accel.sh@20 -- # val=xor 00:16:26.202 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.202 21:19:15 -- accel/accel.sh@23 -- # accel_opc=xor 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.202 21:19:15 -- accel/accel.sh@20 -- # val=2 00:16:26.202 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.202 21:19:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:26.202 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.202 21:19:15 -- accel/accel.sh@20 -- # val= 00:16:26.202 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.202 21:19:15 -- accel/accel.sh@20 -- # val=software 00:16:26.202 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.202 21:19:15 -- accel/accel.sh@22 -- # accel_module=software 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.202 21:19:15 -- accel/accel.sh@20 -- # val=32 00:16:26.202 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.202 21:19:15 -- accel/accel.sh@20 -- # val=32 00:16:26.202 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.202 21:19:15 -- accel/accel.sh@20 -- # val=1 00:16:26.202 21:19:15 -- 
accel/accel.sh@21 -- # case "$var" in 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.202 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.203 21:19:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:26.203 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.203 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.203 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.203 21:19:15 -- accel/accel.sh@20 -- # val=Yes 00:16:26.203 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.203 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.203 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.203 21:19:15 -- accel/accel.sh@20 -- # val= 00:16:26.203 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.203 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.203 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:26.203 21:19:15 -- accel/accel.sh@20 -- # val= 00:16:26.203 21:19:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:26.203 21:19:15 -- accel/accel.sh@19 -- # IFS=: 00:16:26.203 21:19:15 -- accel/accel.sh@19 -- # read -r var val 00:16:27.575 21:19:16 -- accel/accel.sh@20 -- # val= 00:16:27.575 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.575 21:19:16 -- accel/accel.sh@20 -- # val= 00:16:27.575 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.575 21:19:16 -- accel/accel.sh@20 -- # val= 00:16:27.575 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.575 21:19:16 -- accel/accel.sh@20 -- # val= 00:16:27.575 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.575 21:19:16 -- accel/accel.sh@20 -- # val= 00:16:27.575 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.575 21:19:16 -- accel/accel.sh@20 -- # val= 00:16:27.575 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.575 21:19:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:27.575 21:19:16 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:16:27.575 21:19:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:27.575 00:16:27.575 real 0m1.401s 00:16:27.575 user 0m1.214s 00:16:27.575 sys 0m0.090s 00:16:27.575 21:19:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:27.575 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:16:27.575 ************************************ 00:16:27.575 END TEST accel_xor 00:16:27.575 ************************************ 00:16:27.575 21:19:16 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:16:27.575 21:19:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:27.575 21:19:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.575 21:19:16 -- common/autotest_common.sh@10 -- # set +x 00:16:27.575 ************************************ 00:16:27.575 START TEST accel_xor 00:16:27.575 ************************************ 00:16:27.575 
21:19:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:16:27.575 21:19:16 -- accel/accel.sh@16 -- # local accel_opc 00:16:27.575 21:19:16 -- accel/accel.sh@17 -- # local accel_module 00:16:27.575 21:19:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.575 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.575 21:19:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:16:27.575 21:19:16 -- accel/accel.sh@12 -- # build_accel_config 00:16:27.575 21:19:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:27.575 21:19:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:27.576 21:19:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:27.576 21:19:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:27.576 21:19:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:27.576 21:19:16 -- accel/accel.sh@40 -- # local IFS=, 00:16:27.576 21:19:16 -- accel/accel.sh@41 -- # jq -r . 00:16:27.576 [2024-04-26 21:19:16.562026] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:27.576 [2024-04-26 21:19:16.562103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76817 ] 00:16:27.576 [2024-04-26 21:19:16.700733] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.576 [2024-04-26 21:19:16.752525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val= 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val= 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val=0x1 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val= 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val= 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val=xor 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@23 -- # accel_opc=xor 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val=3 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 
00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val= 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val=software 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@22 -- # accel_module=software 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val=32 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val=32 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val=1 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val=Yes 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val= 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:27.576 21:19:16 -- accel/accel.sh@20 -- # val= 00:16:27.576 21:19:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # IFS=: 00:16:27.576 21:19:16 -- accel/accel.sh@19 -- # read -r var val 00:16:28.956 21:19:17 -- accel/accel.sh@20 -- # val= 00:16:28.956 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.956 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:16:28.956 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:16:28.956 21:19:17 -- accel/accel.sh@20 -- # val= 00:16:28.956 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.956 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:16:28.956 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:16:28.956 21:19:17 -- accel/accel.sh@20 -- # val= 00:16:28.956 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.956 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:16:28.956 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:16:28.956 21:19:17 -- accel/accel.sh@20 -- # val= 00:16:28.956 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.956 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:16:28.956 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:16:28.956 21:19:17 -- accel/accel.sh@20 -- # val= 00:16:28.956 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:28.956 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:16:28.956 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:16:28.956 21:19:17 -- accel/accel.sh@20 -- # val= 00:16:28.956 21:19:17 -- accel/accel.sh@21 -- # case "$var" in 
00:16:28.956 21:19:17 -- accel/accel.sh@19 -- # IFS=: 00:16:28.956 21:19:17 -- accel/accel.sh@19 -- # read -r var val 00:16:28.956 21:19:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:28.956 21:19:17 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:16:28.956 21:19:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:28.956 00:16:28.956 real 0m1.393s 00:16:28.956 user 0m1.214s 00:16:28.956 sys 0m0.090s 00:16:28.956 21:19:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:28.956 ************************************ 00:16:28.956 END TEST accel_xor 00:16:28.956 ************************************ 00:16:28.956 21:19:17 -- common/autotest_common.sh@10 -- # set +x 00:16:28.956 21:19:17 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:16:28.956 21:19:17 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:28.956 21:19:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.956 21:19:17 -- common/autotest_common.sh@10 -- # set +x 00:16:28.956 ************************************ 00:16:28.956 START TEST accel_dif_verify 00:16:28.956 ************************************ 00:16:28.956 21:19:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:16:28.956 21:19:18 -- accel/accel.sh@16 -- # local accel_opc 00:16:28.956 21:19:18 -- accel/accel.sh@17 -- # local accel_module 00:16:28.956 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:28.956 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:28.956 21:19:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:16:28.956 21:19:18 -- accel/accel.sh@12 -- # build_accel_config 00:16:28.956 21:19:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:16:28.956 21:19:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:28.956 21:19:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:28.956 21:19:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:28.956 21:19:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:28.956 21:19:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:28.956 21:19:18 -- accel/accel.sh@40 -- # local IFS=, 00:16:28.956 21:19:18 -- accel/accel.sh@41 -- # jq -r . 00:16:28.956 [2024-04-26 21:19:18.096282] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
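[Editor's note, not part of the captured log] Each accel_* test in this block runs the accel_perf example app once per workload; the xtrace lines are accel.sh echoing its own option parsing. A rough hand-run equivalent of the accel_xor case that finished above, using the paths shown in the log and omitting the JSON config the harness pipes in via /dev/fd/62 (without it, accel_perf should fall back to the software accel module), would be:

    # Sketch only; flags copied from the logged invocation.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 \      # run the workload for 1 second
        -w xor \    # workload type: xor
        -y \        # verify the result
        -x 3        # xor source buffer count, as passed by the harness (meaning assumed)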
00:16:28.956 [2024-04-26 21:19:18.096367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76856 ] 00:16:29.222 [2024-04-26 21:19:18.237768] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.222 [2024-04-26 21:19:18.289433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val= 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val= 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val=0x1 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val= 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val= 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val=dif_verify 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val='512 bytes' 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val='8 bytes' 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val= 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val=software 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@22 -- # accel_module=software 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 
-- # val=32 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val=32 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val=1 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.222 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.222 21:19:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:29.222 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.223 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.223 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.223 21:19:18 -- accel/accel.sh@20 -- # val=No 00:16:29.223 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.223 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.223 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.223 21:19:18 -- accel/accel.sh@20 -- # val= 00:16:29.223 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.223 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.223 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:29.223 21:19:18 -- accel/accel.sh@20 -- # val= 00:16:29.223 21:19:18 -- accel/accel.sh@21 -- # case "$var" in 00:16:29.223 21:19:18 -- accel/accel.sh@19 -- # IFS=: 00:16:29.223 21:19:18 -- accel/accel.sh@19 -- # read -r var val 00:16:30.608 21:19:19 -- accel/accel.sh@20 -- # val= 00:16:30.608 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.608 21:19:19 -- accel/accel.sh@20 -- # val= 00:16:30.608 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.608 21:19:19 -- accel/accel.sh@20 -- # val= 00:16:30.608 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.608 21:19:19 -- accel/accel.sh@20 -- # val= 00:16:30.608 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.608 21:19:19 -- accel/accel.sh@20 -- # val= 00:16:30.608 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.608 21:19:19 -- accel/accel.sh@20 -- # val= 00:16:30.608 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.608 21:19:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:30.608 21:19:19 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:16:30.608 21:19:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:30.608 00:16:30.608 real 0m1.403s 00:16:30.608 user 0m1.221s 00:16:30.608 sys 0m0.096s 00:16:30.608 21:19:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:30.608 21:19:19 -- common/autotest_common.sh@10 -- # set +x 00:16:30.608 ************************************ 00:16:30.608 END TEST 
accel_dif_verify 00:16:30.608 ************************************ 00:16:30.608 21:19:19 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:16:30.608 21:19:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:30.608 21:19:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:30.608 21:19:19 -- common/autotest_common.sh@10 -- # set +x 00:16:30.608 ************************************ 00:16:30.608 START TEST accel_dif_generate 00:16:30.608 ************************************ 00:16:30.608 21:19:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:16:30.608 21:19:19 -- accel/accel.sh@16 -- # local accel_opc 00:16:30.608 21:19:19 -- accel/accel.sh@17 -- # local accel_module 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.608 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.608 21:19:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:16:30.608 21:19:19 -- accel/accel.sh@12 -- # build_accel_config 00:16:30.608 21:19:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:16:30.608 21:19:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:30.608 21:19:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:30.608 21:19:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:30.608 21:19:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:30.608 21:19:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:30.608 21:19:19 -- accel/accel.sh@40 -- # local IFS=, 00:16:30.608 21:19:19 -- accel/accel.sh@41 -- # jq -r . 00:16:30.608 [2024-04-26 21:19:19.620285] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:30.608 [2024-04-26 21:19:19.620385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76894 ] 00:16:30.608 [2024-04-26 21:19:19.759036] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.608 [2024-04-26 21:19:19.812420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.608 21:19:19 -- accel/accel.sh@20 -- # val= 00:16:30.609 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.609 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.609 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.609 21:19:19 -- accel/accel.sh@20 -- # val= 00:16:30.609 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.609 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.609 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.609 21:19:19 -- accel/accel.sh@20 -- # val=0x1 00:16:30.609 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.609 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.609 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.609 21:19:19 -- accel/accel.sh@20 -- # val= 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val= 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val=dif_generate 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val='512 bytes' 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val='8 bytes' 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val= 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val=software 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@22 -- # accel_module=software 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val=32 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val=32 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val=1 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val=No 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val= 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:30.868 21:19:19 -- accel/accel.sh@20 -- # val= 00:16:30.868 21:19:19 -- accel/accel.sh@21 -- # case "$var" in 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # IFS=: 00:16:30.868 21:19:19 -- accel/accel.sh@19 -- # read -r var val 00:16:31.806 21:19:20 -- accel/accel.sh@20 -- # val= 00:16:31.806 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.806 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:16:31.806 21:19:20 -- 
accel/accel.sh@19 -- # read -r var val 00:16:31.806 21:19:20 -- accel/accel.sh@20 -- # val= 00:16:31.806 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.806 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:16:31.806 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:16:31.806 21:19:20 -- accel/accel.sh@20 -- # val= 00:16:31.806 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.806 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:16:31.806 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:16:31.806 21:19:20 -- accel/accel.sh@20 -- # val= 00:16:31.806 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.806 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:16:31.806 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:16:31.806 21:19:20 -- accel/accel.sh@20 -- # val= 00:16:31.806 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.806 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:16:31.806 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:16:31.806 21:19:20 -- accel/accel.sh@20 -- # val= 00:16:31.806 21:19:20 -- accel/accel.sh@21 -- # case "$var" in 00:16:31.806 21:19:20 -- accel/accel.sh@19 -- # IFS=: 00:16:31.806 21:19:20 -- accel/accel.sh@19 -- # read -r var val 00:16:31.806 21:19:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:31.806 21:19:20 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:16:31.806 21:19:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:31.806 00:16:31.806 real 0m1.404s 00:16:31.806 user 0m1.222s 00:16:31.806 sys 0m0.095s 00:16:31.806 21:19:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:31.806 21:19:20 -- common/autotest_common.sh@10 -- # set +x 00:16:31.806 ************************************ 00:16:31.806 END TEST accel_dif_generate 00:16:31.806 ************************************ 00:16:31.806 21:19:21 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:16:31.806 21:19:21 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:16:31.806 21:19:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:31.806 21:19:21 -- common/autotest_common.sh@10 -- # set +x 00:16:32.066 ************************************ 00:16:32.066 START TEST accel_dif_generate_copy 00:16:32.066 ************************************ 00:16:32.066 21:19:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:16:32.066 21:19:21 -- accel/accel.sh@16 -- # local accel_opc 00:16:32.066 21:19:21 -- accel/accel.sh@17 -- # local accel_module 00:16:32.066 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.066 21:19:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:16:32.066 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.067 21:19:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:16:32.067 21:19:21 -- accel/accel.sh@12 -- # build_accel_config 00:16:32.067 21:19:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:32.067 21:19:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:32.067 21:19:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:32.067 21:19:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:32.067 21:19:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:32.067 21:19:21 -- accel/accel.sh@40 -- # local IFS=, 00:16:32.067 21:19:21 -- accel/accel.sh@41 -- # jq -r . 00:16:32.067 [2024-04-26 21:19:21.176721] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
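[Editor's note, not part of the captured log] The accel_dif_generate_copy case now starting takes no input file (no -l is passed); the '4096 bytes' buffer sizes echoed in the xtrace are generated internally. With the same caveat about the omitted JSON config, a hand-run sketch is:

    # Sketch only; mirrors the logged dif_generate_copy run.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 \                   # 1-second run
        -w dif_generate_copy     # workload type from the log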
00:16:32.067 [2024-04-26 21:19:21.176888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76933 ] 00:16:32.067 [2024-04-26 21:19:21.317395] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.327 [2024-04-26 21:19:21.369121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val= 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val= 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val=0x1 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val= 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val= 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val= 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val=software 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@22 -- # accel_module=software 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val=32 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val=32 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 
-- # val=1 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val=No 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val= 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:32.327 21:19:21 -- accel/accel.sh@20 -- # val= 00:16:32.327 21:19:21 -- accel/accel.sh@21 -- # case "$var" in 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # IFS=: 00:16:32.327 21:19:21 -- accel/accel.sh@19 -- # read -r var val 00:16:33.706 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:33.706 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:33.706 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:33.706 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:33.706 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:33.706 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:33.706 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:33.706 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:33.706 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:33.706 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:33.706 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:33.706 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:33.706 21:19:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:33.706 21:19:22 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:16:33.706 21:19:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:33.706 00:16:33.706 real 0m1.403s 00:16:33.706 user 0m1.217s 00:16:33.706 sys 0m0.097s 00:16:33.706 21:19:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:33.706 21:19:22 -- common/autotest_common.sh@10 -- # set +x 00:16:33.706 ************************************ 00:16:33.706 END TEST accel_dif_generate_copy 00:16:33.706 ************************************ 00:16:33.706 21:19:22 -- accel/accel.sh@115 -- # [[ y == y ]] 00:16:33.706 21:19:22 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:33.706 21:19:22 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:16:33.706 21:19:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:33.706 21:19:22 -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.706 ************************************ 00:16:33.706 START TEST accel_comp 00:16:33.706 ************************************ 00:16:33.706 21:19:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:33.706 21:19:22 -- accel/accel.sh@16 -- # local accel_opc 00:16:33.706 21:19:22 -- accel/accel.sh@17 -- # local accel_module 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:33.706 21:19:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:33.706 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:33.706 21:19:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:33.706 21:19:22 -- accel/accel.sh@12 -- # build_accel_config 00:16:33.706 21:19:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:33.706 21:19:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:33.706 21:19:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:33.706 21:19:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:33.706 21:19:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:33.706 21:19:22 -- accel/accel.sh@40 -- # local IFS=, 00:16:33.706 21:19:22 -- accel/accel.sh@41 -- # jq -r . 00:16:33.706 [2024-04-26 21:19:22.727108] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:33.706 [2024-04-26 21:19:22.727188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76971 ] 00:16:33.706 [2024-04-26 21:19:22.874064] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.706 [2024-04-26 21:19:22.928756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val=0x1 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val=compress 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@23 
-- # accel_opc=compress 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val=software 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@22 -- # accel_module=software 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val=32 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val=32 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val=1 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val=No 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.000 21:19:22 -- accel/accel.sh@20 -- # val= 00:16:34.000 21:19:22 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # IFS=: 00:16:34.000 21:19:22 -- accel/accel.sh@19 -- # read -r var val 00:16:34.945 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:34.945 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.945 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:34.945 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:34.945 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:34.945 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.945 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:34.945 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:34.945 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:34.945 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.945 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:34.945 21:19:24 -- accel/accel.sh@19 -- # 
read -r var val 00:16:34.945 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:34.945 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.945 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:34.945 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:34.945 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:34.945 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.945 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:34.945 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:34.945 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:34.945 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:34.946 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:34.946 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:34.946 21:19:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:34.946 21:19:24 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:16:34.946 ************************************ 00:16:34.946 END TEST accel_comp 00:16:34.946 ************************************ 00:16:34.946 21:19:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:34.946 00:16:34.946 real 0m1.420s 00:16:34.946 user 0m1.228s 00:16:34.946 sys 0m0.105s 00:16:34.946 21:19:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:34.946 21:19:24 -- common/autotest_common.sh@10 -- # set +x 00:16:34.946 21:19:24 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:34.946 21:19:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:34.946 21:19:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:34.946 21:19:24 -- common/autotest_common.sh@10 -- # set +x 00:16:35.206 ************************************ 00:16:35.206 START TEST accel_decomp 00:16:35.206 ************************************ 00:16:35.206 21:19:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:35.206 21:19:24 -- accel/accel.sh@16 -- # local accel_opc 00:16:35.206 21:19:24 -- accel/accel.sh@17 -- # local accel_module 00:16:35.206 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.206 21:19:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:35.206 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.206 21:19:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:35.206 21:19:24 -- accel/accel.sh@12 -- # build_accel_config 00:16:35.206 21:19:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:35.206 21:19:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:35.206 21:19:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:35.206 21:19:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:35.206 21:19:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:35.206 21:19:24 -- accel/accel.sh@40 -- # local IFS=, 00:16:35.206 21:19:24 -- accel/accel.sh@41 -- # jq -r . 00:16:35.206 [2024-04-26 21:19:24.284466] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
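[Editor's note, not part of the captured log] The compress/decompress cases are the only ones in this block that read an input file, passed with -l (test/accel/bib in this tree). A minimal hand-run sketch of the accel_decomp case now starting, with the usual caveat about the omitted JSON config:

    # Sketch only; flags copied from the logged accel_decomp invocation.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 \                                            # 1-second run
        -w decompress \                                   # workload type
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \  # input data file (from the logged -l)
        -y                                                # verify the result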
00:16:35.206 [2024-04-26 21:19:24.284627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77010 ] 00:16:35.206 [2024-04-26 21:19:24.425652] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.465 [2024-04-26 21:19:24.477804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val=0x1 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val=decompress 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val=software 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@22 -- # accel_module=software 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val=32 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- 
accel/accel.sh@20 -- # val=32 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val=1 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val=Yes 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:35.465 21:19:24 -- accel/accel.sh@20 -- # val= 00:16:35.465 21:19:24 -- accel/accel.sh@21 -- # case "$var" in 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # IFS=: 00:16:35.465 21:19:24 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:25 -- accel/accel.sh@20 -- # val= 00:16:36.843 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:25 -- accel/accel.sh@20 -- # val= 00:16:36.843 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:25 -- accel/accel.sh@20 -- # val= 00:16:36.843 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:25 -- accel/accel.sh@20 -- # val= 00:16:36.843 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:25 -- accel/accel.sh@20 -- # val= 00:16:36.843 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:25 -- accel/accel.sh@20 -- # val= 00:16:36.843 21:19:25 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:36.843 21:19:25 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:36.843 21:19:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:36.843 00:16:36.843 real 0m1.407s 00:16:36.843 user 0m0.014s 00:16:36.843 sys 0m0.000s 00:16:36.843 21:19:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:36.843 21:19:25 -- common/autotest_common.sh@10 -- # set +x 00:16:36.843 ************************************ 00:16:36.843 END TEST accel_decomp 00:16:36.843 ************************************ 00:16:36.843 21:19:25 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
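[Editor's note, not part of the captured log] The accel_decmop_full run queued on the previous line (the 'decmop' typo is in the upstream test name) repeats the decompress workload with '-o 0'; judging from the '111250 bytes' value echoed further down, that appears to make accel_perf use the whole input as a single transfer rather than the default chunk size. Sketch, with the meaning of '-o 0' treated as an assumption:

    # Sketch only; '-o 0' taken verbatim from the logged run.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -y \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -o 0    # transfer size; 0 appears to mean 'use the full input size'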
00:16:36.843 21:19:25 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:16:36.843 21:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:36.843 21:19:25 -- common/autotest_common.sh@10 -- # set +x 00:16:36.843 ************************************ 00:16:36.843 START TEST accel_decmop_full 00:16:36.843 ************************************ 00:16:36.843 21:19:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:36.843 21:19:25 -- accel/accel.sh@16 -- # local accel_opc 00:16:36.843 21:19:25 -- accel/accel.sh@17 -- # local accel_module 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:36.843 21:19:25 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:25 -- accel/accel.sh@12 -- # build_accel_config 00:16:36.843 21:19:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:16:36.843 21:19:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:36.843 21:19:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:36.843 21:19:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:36.843 21:19:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:36.843 21:19:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:36.843 21:19:25 -- accel/accel.sh@40 -- # local IFS=, 00:16:36.843 21:19:25 -- accel/accel.sh@41 -- # jq -r . 00:16:36.843 [2024-04-26 21:19:25.808085] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:36.843 [2024-04-26 21:19:25.808248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77048 ] 00:16:36.843 [2024-04-26 21:19:25.946956] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.843 [2024-04-26 21:19:25.998317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val= 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val= 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val= 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val=0x1 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val= 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val= 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 
21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val=decompress 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val= 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val=software 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@22 -- # accel_module=software 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val=32 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val=32 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val=1 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.843 21:19:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:36.843 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.843 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.844 21:19:26 -- accel/accel.sh@20 -- # val=Yes 00:16:36.844 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.844 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.844 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.844 21:19:26 -- accel/accel.sh@20 -- # val= 00:16:36.844 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.844 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.844 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:36.844 21:19:26 -- accel/accel.sh@20 -- # val= 00:16:36.844 21:19:26 -- accel/accel.sh@21 -- # case "$var" in 00:16:36.844 21:19:26 -- accel/accel.sh@19 -- # IFS=: 00:16:36.844 21:19:26 -- accel/accel.sh@19 -- # read -r var val 00:16:38.223 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.223 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.223 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.223 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # read -r 
var val 00:16:38.223 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.223 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.223 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.223 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.223 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.223 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.223 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.223 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.223 21:19:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:38.223 21:19:27 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:38.223 21:19:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:38.223 00:16:38.223 real 0m1.414s 00:16:38.223 user 0m1.221s 00:16:38.223 sys 0m0.099s 00:16:38.223 21:19:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:38.223 21:19:27 -- common/autotest_common.sh@10 -- # set +x 00:16:38.223 ************************************ 00:16:38.223 END TEST accel_decmop_full 00:16:38.223 ************************************ 00:16:38.223 21:19:27 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:38.223 21:19:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:16:38.223 21:19:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.223 21:19:27 -- common/autotest_common.sh@10 -- # set +x 00:16:38.223 ************************************ 00:16:38.223 START TEST accel_decomp_mcore 00:16:38.223 ************************************ 00:16:38.223 21:19:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:38.223 21:19:27 -- accel/accel.sh@16 -- # local accel_opc 00:16:38.223 21:19:27 -- accel/accel.sh@17 -- # local accel_module 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.223 21:19:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:38.223 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.223 21:19:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:16:38.223 21:19:27 -- accel/accel.sh@12 -- # build_accel_config 00:16:38.223 21:19:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:38.223 21:19:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:38.223 21:19:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:38.223 21:19:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:38.223 21:19:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:38.223 21:19:27 -- accel/accel.sh@40 -- # local IFS=, 00:16:38.223 21:19:27 -- accel/accel.sh@41 -- # jq -r . 00:16:38.223 [2024-04-26 21:19:27.363926] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
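[Editor's note, not part of the captured log] The accel_decomp_mcore case now starting is the same decompress workload with '-m 0xf', an SPDK core mask selecting four cores; that is why the EAL parameters line that follows shows '-c 0xf' and four reactors report in. Sketch, same caveats as above:

    # Sketch only; '-m 0xf' is the core mask from the logged run.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w decompress -y \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -m 0xf    # run on cores 0-3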
00:16:38.223 [2024-04-26 21:19:27.364084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77087 ] 00:16:38.483 [2024-04-26 21:19:27.506134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:38.483 [2024-04-26 21:19:27.560682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.483 [2024-04-26 21:19:27.560898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.483 [2024-04-26 21:19:27.561006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.483 [2024-04-26 21:19:27.561010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val=0xf 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val=decompress 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val=software 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@22 -- # accel_module=software 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 
00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val=32 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val=32 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val=1 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val=Yes 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:38.483 21:19:27 -- accel/accel.sh@20 -- # val= 00:16:38.483 21:19:27 -- accel/accel.sh@21 -- # case "$var" in 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # IFS=: 00:16:38.483 21:19:27 -- accel/accel.sh@19 -- # read -r var val 00:16:39.860 21:19:28 -- accel/accel.sh@20 -- # val= 00:16:39.860 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.860 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:16:39.860 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:16:39.860 21:19:28 -- accel/accel.sh@20 -- # val= 00:16:39.860 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.860 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:16:39.860 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:16:39.860 21:19:28 -- accel/accel.sh@20 -- # val= 00:16:39.860 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.860 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:16:39.860 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:16:39.860 21:19:28 -- accel/accel.sh@20 -- # val= 00:16:39.860 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.860 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:16:39.860 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:16:39.860 21:19:28 -- accel/accel.sh@20 -- # val= 00:16:39.860 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.860 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:16:39.860 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:16:39.860 21:19:28 -- accel/accel.sh@20 -- # val= 00:16:39.860 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.860 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:16:39.860 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:16:39.861 21:19:28 -- accel/accel.sh@20 -- # val= 00:16:39.861 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.861 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:16:39.861 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:16:39.861 21:19:28 -- accel/accel.sh@20 -- # val= 00:16:39.861 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.861 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:16:39.861 21:19:28 -- 
accel/accel.sh@19 -- # read -r var val 00:16:39.861 21:19:28 -- accel/accel.sh@20 -- # val= 00:16:39.861 21:19:28 -- accel/accel.sh@21 -- # case "$var" in 00:16:39.861 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:16:39.861 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:16:39.861 21:19:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:39.861 21:19:28 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:39.861 21:19:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:39.861 00:16:39.861 real 0m1.425s 00:16:39.861 user 0m4.562s 00:16:39.861 sys 0m0.102s 00:16:39.861 21:19:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:39.861 ************************************ 00:16:39.861 END TEST accel_decomp_mcore 00:16:39.861 ************************************ 00:16:39.861 21:19:28 -- common/autotest_common.sh@10 -- # set +x 00:16:39.861 21:19:28 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:39.861 21:19:28 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:16:39.861 21:19:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:39.861 21:19:28 -- common/autotest_common.sh@10 -- # set +x 00:16:39.861 ************************************ 00:16:39.861 START TEST accel_decomp_full_mcore 00:16:39.861 ************************************ 00:16:39.861 21:19:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:39.861 21:19:28 -- accel/accel.sh@16 -- # local accel_opc 00:16:39.861 21:19:28 -- accel/accel.sh@17 -- # local accel_module 00:16:39.861 21:19:28 -- accel/accel.sh@19 -- # IFS=: 00:16:39.861 21:19:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:39.861 21:19:28 -- accel/accel.sh@19 -- # read -r var val 00:16:39.861 21:19:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:16:39.861 21:19:28 -- accel/accel.sh@12 -- # build_accel_config 00:16:39.861 21:19:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:39.861 21:19:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:39.861 21:19:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:39.861 21:19:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:39.861 21:19:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:39.861 21:19:28 -- accel/accel.sh@40 -- # local IFS=, 00:16:39.861 21:19:28 -- accel/accel.sh@41 -- # jq -r . 00:16:39.861 [2024-04-26 21:19:28.938176] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:16:39.861 [2024-04-26 21:19:28.938260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77129 ] 00:16:39.861 [2024-04-26 21:19:29.078500] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.140 [2024-04-26 21:19:29.135016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.140 [2024-04-26 21:19:29.135129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.140 [2024-04-26 21:19:29.135223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.140 [2024-04-26 21:19:29.135226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.140 21:19:29 -- accel/accel.sh@20 -- # val= 00:16:40.140 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.140 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.140 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.140 21:19:29 -- accel/accel.sh@20 -- # val= 00:16:40.140 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.140 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.140 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.140 21:19:29 -- accel/accel.sh@20 -- # val= 00:16:40.140 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.140 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.140 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val=0xf 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val= 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val= 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val=decompress 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val= 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val=software 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@22 -- # accel_module=software 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 
00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val=32 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val=32 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val=1 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val=Yes 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val= 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:40.141 21:19:29 -- accel/accel.sh@20 -- # val= 00:16:40.141 21:19:29 -- accel/accel.sh@21 -- # case "$var" in 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # IFS=: 00:16:40.141 21:19:29 -- accel/accel.sh@19 -- # read -r var val 00:16:41.518 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.518 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.518 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.518 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.518 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.518 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.518 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.518 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.518 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.518 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.518 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.518 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.518 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.518 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.518 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.518 ************************************ 00:16:41.518 END TEST accel_decomp_full_mcore 00:16:41.518 
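[editor's note] For reference, the accel_decomp_mcore and accel_decomp_full_mcore runs traced above each reduce to a single accel_perf invocation; a minimal standalone sketch of the full-buffer multi-core case follows. The SPDK= shorthand is only for readability here; paths and flags are copied from the trace, the -c /dev/fd/62 JSON config seen above is supplied by the test harness and is omitted, and with -o 0 the trace reports a 111250-byte transfer (the whole bib file) instead of the 4096-byte default.

  SPDK=/home/vagrant/spdk_repo/spdk
  # 1-second verified (-y) software decompress of test/accel/bib on cores 0-3 (-m 0xf).
  $SPDK/build/examples/accel_perf -t 1 -w decompress \
      -l $SPDK/test/accel/bib -y -o 0 -m 0xf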
************************************ 00:16:41.518 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.518 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.518 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.518 21:19:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:41.518 21:19:30 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:41.518 21:19:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:41.518 00:16:41.518 real 0m1.441s 00:16:41.518 user 0m4.600s 00:16:41.518 sys 0m0.117s 00:16:41.518 21:19:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:41.518 21:19:30 -- common/autotest_common.sh@10 -- # set +x 00:16:41.518 21:19:30 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:41.518 21:19:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:16:41.518 21:19:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:41.518 21:19:30 -- common/autotest_common.sh@10 -- # set +x 00:16:41.518 ************************************ 00:16:41.518 START TEST accel_decomp_mthread 00:16:41.518 ************************************ 00:16:41.518 21:19:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:41.518 21:19:30 -- accel/accel.sh@16 -- # local accel_opc 00:16:41.518 21:19:30 -- accel/accel.sh@17 -- # local accel_module 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.518 21:19:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:41.518 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.518 21:19:30 -- accel/accel.sh@12 -- # build_accel_config 00:16:41.518 21:19:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:16:41.518 21:19:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:41.518 21:19:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:41.518 21:19:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:41.518 21:19:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:41.518 21:19:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:41.518 21:19:30 -- accel/accel.sh@40 -- # local IFS=, 00:16:41.518 21:19:30 -- accel/accel.sh@41 -- # jq -r . 00:16:41.518 [2024-04-26 21:19:30.527569] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:16:41.518 [2024-04-26 21:19:30.527660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77172 ] 00:16:41.518 [2024-04-26 21:19:30.666199] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.518 [2024-04-26 21:19:30.733435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val=0x1 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val=decompress 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val=software 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@22 -- # accel_module=software 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val=32 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- 
accel/accel.sh@20 -- # val=32 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val=2 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val=Yes 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:41.777 21:19:30 -- accel/accel.sh@20 -- # val= 00:16:41.777 21:19:30 -- accel/accel.sh@21 -- # case "$var" in 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # IFS=: 00:16:41.777 21:19:30 -- accel/accel.sh@19 -- # read -r var val 00:16:42.713 21:19:31 -- accel/accel.sh@20 -- # val= 00:16:42.713 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:16:42.713 21:19:31 -- accel/accel.sh@20 -- # val= 00:16:42.713 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:16:42.713 21:19:31 -- accel/accel.sh@20 -- # val= 00:16:42.713 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:16:42.713 21:19:31 -- accel/accel.sh@20 -- # val= 00:16:42.713 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:16:42.713 21:19:31 -- accel/accel.sh@20 -- # val= 00:16:42.713 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:16:42.713 21:19:31 -- accel/accel.sh@20 -- # val= 00:16:42.713 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:16:42.713 21:19:31 -- accel/accel.sh@20 -- # val= 00:16:42.713 21:19:31 -- accel/accel.sh@21 -- # case "$var" in 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # IFS=: 00:16:42.713 21:19:31 -- accel/accel.sh@19 -- # read -r var val 00:16:42.713 21:19:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:42.713 21:19:31 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:42.713 21:19:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:42.713 00:16:42.713 real 0m1.426s 00:16:42.713 user 0m1.240s 00:16:42.713 sys 0m0.104s 00:16:42.713 21:19:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:42.713 21:19:31 -- common/autotest_common.sh@10 -- # set +x 00:16:42.713 ************************************ 00:16:42.713 END 
TEST accel_decomp_mthread 00:16:42.713 ************************************ 00:16:42.972 21:19:31 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:42.972 21:19:31 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:16:42.972 21:19:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.972 21:19:31 -- common/autotest_common.sh@10 -- # set +x 00:16:42.972 ************************************ 00:16:42.972 START TEST accel_deomp_full_mthread 00:16:42.972 ************************************ 00:16:42.972 21:19:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:42.972 21:19:32 -- accel/accel.sh@16 -- # local accel_opc 00:16:42.972 21:19:32 -- accel/accel.sh@17 -- # local accel_module 00:16:42.972 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:42.972 21:19:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:42.972 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:42.972 21:19:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:16:42.972 21:19:32 -- accel/accel.sh@12 -- # build_accel_config 00:16:42.972 21:19:32 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:42.972 21:19:32 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:42.972 21:19:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:42.972 21:19:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:42.972 21:19:32 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:42.972 21:19:32 -- accel/accel.sh@40 -- # local IFS=, 00:16:42.972 21:19:32 -- accel/accel.sh@41 -- # jq -r . 00:16:42.972 [2024-04-26 21:19:32.091949] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
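[editor's note] The two *_mthread variants traced above and below change only the accel_perf flags relative to the multi-core runs: no -m core mask is passed (the EAL lines show the default -c 0x1), -T 2 is added, and in the full variant -o 0 again selects the 111250-byte buffer. A sketch of the full-buffer threaded case, with the same path shorthand as before:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Same workload as above, single default core, with the extra -T 2 flag from the trace.
  $SPDK/build/examples/accel_perf -t 1 -w decompress \
      -l $SPDK/test/accel/bib -y -o 0 -T 2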
00:16:42.972 [2024-04-26 21:19:32.092078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77205 ] 00:16:43.232 [2024-04-26 21:19:32.233260] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.232 [2024-04-26 21:19:32.284026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.232 21:19:32 -- accel/accel.sh@20 -- # val= 00:16:43.232 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.232 21:19:32 -- accel/accel.sh@20 -- # val= 00:16:43.232 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.232 21:19:32 -- accel/accel.sh@20 -- # val= 00:16:43.232 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.232 21:19:32 -- accel/accel.sh@20 -- # val=0x1 00:16:43.232 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.232 21:19:32 -- accel/accel.sh@20 -- # val= 00:16:43.232 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.232 21:19:32 -- accel/accel.sh@20 -- # val= 00:16:43.232 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.232 21:19:32 -- accel/accel.sh@20 -- # val=decompress 00:16:43.232 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.232 21:19:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.232 21:19:32 -- accel/accel.sh@20 -- # val='111250 bytes' 00:16:43.232 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.232 21:19:32 -- accel/accel.sh@20 -- # val= 00:16:43.232 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.232 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.232 21:19:32 -- accel/accel.sh@20 -- # val=software 00:16:43.232 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.233 21:19:32 -- accel/accel.sh@22 -- # accel_module=software 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.233 21:19:32 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:16:43.233 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.233 21:19:32 -- accel/accel.sh@20 -- # val=32 00:16:43.233 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.233 21:19:32 -- 
accel/accel.sh@20 -- # val=32 00:16:43.233 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.233 21:19:32 -- accel/accel.sh@20 -- # val=2 00:16:43.233 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.233 21:19:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:43.233 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.233 21:19:32 -- accel/accel.sh@20 -- # val=Yes 00:16:43.233 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.233 21:19:32 -- accel/accel.sh@20 -- # val= 00:16:43.233 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:43.233 21:19:32 -- accel/accel.sh@20 -- # val= 00:16:43.233 21:19:32 -- accel/accel.sh@21 -- # case "$var" in 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # IFS=: 00:16:43.233 21:19:32 -- accel/accel.sh@19 -- # read -r var val 00:16:44.612 21:19:33 -- accel/accel.sh@20 -- # val= 00:16:44.612 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:16:44.612 21:19:33 -- accel/accel.sh@20 -- # val= 00:16:44.612 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:16:44.612 21:19:33 -- accel/accel.sh@20 -- # val= 00:16:44.612 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:16:44.612 21:19:33 -- accel/accel.sh@20 -- # val= 00:16:44.612 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:16:44.612 21:19:33 -- accel/accel.sh@20 -- # val= 00:16:44.612 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:16:44.612 21:19:33 -- accel/accel.sh@20 -- # val= 00:16:44.612 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:16:44.612 21:19:33 -- accel/accel.sh@20 -- # val= 00:16:44.612 21:19:33 -- accel/accel.sh@21 -- # case "$var" in 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # IFS=: 00:16:44.612 21:19:33 -- accel/accel.sh@19 -- # read -r var val 00:16:44.612 ************************************ 00:16:44.612 END TEST accel_deomp_full_mthread 00:16:44.612 ************************************ 00:16:44.612 21:19:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:44.612 21:19:33 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:16:44.612 21:19:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:44.612 00:16:44.612 real 0m1.441s 00:16:44.612 user 0m1.256s 00:16:44.612 sys 0m0.100s 00:16:44.612 21:19:33 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:16:44.612 21:19:33 -- common/autotest_common.sh@10 -- # set +x 00:16:44.612 21:19:33 -- accel/accel.sh@124 -- # [[ n == y ]] 00:16:44.612 21:19:33 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:16:44.612 21:19:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:44.612 21:19:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:44.612 21:19:33 -- common/autotest_common.sh@10 -- # set +x 00:16:44.612 21:19:33 -- accel/accel.sh@137 -- # build_accel_config 00:16:44.612 21:19:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:44.612 21:19:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:44.612 21:19:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:44.612 21:19:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:44.612 21:19:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:44.612 21:19:33 -- accel/accel.sh@40 -- # local IFS=, 00:16:44.612 21:19:33 -- accel/accel.sh@41 -- # jq -r . 00:16:44.612 ************************************ 00:16:44.612 START TEST accel_dif_functional_tests 00:16:44.612 ************************************ 00:16:44.612 21:19:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:16:44.612 [2024-04-26 21:19:33.677806] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:44.612 [2024-04-26 21:19:33.677882] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77252 ] 00:16:44.612 [2024-04-26 21:19:33.816700] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:44.871 [2024-04-26 21:19:33.869388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.871 [2024-04-26 21:19:33.869449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.871 [2024-04-26 21:19:33.869452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.871 00:16:44.871 00:16:44.871 CUnit - A unit testing framework for C - Version 2.1-3 00:16:44.871 http://cunit.sourceforge.net/ 00:16:44.871 00:16:44.871 00:16:44.871 Suite: accel_dif 00:16:44.871 Test: verify: DIF generated, GUARD check ...passed 00:16:44.871 Test: verify: DIF generated, APPTAG check ...passed 00:16:44.871 Test: verify: DIF generated, REFTAG check ...passed 00:16:44.871 Test: verify: DIF not generated, GUARD check ...passed 00:16:44.871 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 21:19:33.936275] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:16:44.871 [2024-04-26 21:19:33.936368] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:16:44.871 [2024-04-26 21:19:33.936402] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:16:44.871 passed 00:16:44.871 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 21:19:33.936448] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:16:44.871 [2024-04-26 21:19:33.936471] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:16:44.871 passed 00:16:44.871 Test: verify: APPTAG correct, APPTAG check ...[2024-04-26 21:19:33.936500] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:16:44.871 passed 00:16:44.871 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:16:44.871 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-04-26 21:19:33.936582] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:16:44.871 passed 00:16:44.871 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:16:44.871 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:16:44.871 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 21:19:33.936758] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:16:44.871 passed 00:16:44.871 Test: generate copy: DIF generated, GUARD check ...passed 00:16:44.871 Test: generate copy: DIF generated, APTTAG check ...passed 00:16:44.871 Test: generate copy: DIF generated, REFTAG check ...passed 00:16:44.871 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:16:44.871 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:16:44.871 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:16:44.871 Test: generate copy: iovecs-len validate ...[2024-04-26 21:19:33.937059] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:16:44.871 passed 00:16:44.871 Test: generate copy: buffer alignment validate ...passed 00:16:44.871 00:16:44.871 Run Summary: Type Total Ran Passed Failed Inactive 00:16:44.871 suites 1 1 n/a 0 0 00:16:44.871 tests 20 20 20 0 0 00:16:44.871 asserts 204 204 204 0 n/a 00:16:44.871 00:16:44.871 Elapsed time = 0.003 seconds 00:16:44.871 00:16:44.871 real 0m0.486s 00:16:44.871 user 0m0.569s 00:16:44.871 sys 0m0.126s 00:16:44.871 ************************************ 00:16:44.871 END TEST accel_dif_functional_tests 00:16:44.871 ************************************ 00:16:44.871 21:19:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:44.871 21:19:34 -- common/autotest_common.sh@10 -- # set +x 00:16:45.130 00:16:45.130 real 0m34.483s 00:16:45.130 user 0m35.227s 00:16:45.130 sys 0m4.545s 00:16:45.130 ************************************ 00:16:45.130 END TEST accel 00:16:45.130 ************************************ 00:16:45.130 21:19:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:45.130 21:19:34 -- common/autotest_common.sh@10 -- # set +x 00:16:45.130 21:19:34 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:16:45.130 21:19:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:45.130 21:19:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:45.130 21:19:34 -- common/autotest_common.sh@10 -- # set +x 00:16:45.130 ************************************ 00:16:45.130 START TEST accel_rpc 00:16:45.130 ************************************ 00:16:45.130 21:19:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:16:45.387 * Looking for test storage... 
00:16:45.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:16:45.387 21:19:34 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:45.387 21:19:34 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=77327 00:16:45.387 21:19:34 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:45.387 21:19:34 -- accel/accel_rpc.sh@15 -- # waitforlisten 77327 00:16:45.387 21:19:34 -- common/autotest_common.sh@817 -- # '[' -z 77327 ']' 00:16:45.387 21:19:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.387 21:19:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:45.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.387 21:19:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.388 21:19:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:45.388 21:19:34 -- common/autotest_common.sh@10 -- # set +x 00:16:45.388 [2024-04-26 21:19:34.495360] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:45.388 [2024-04-26 21:19:34.495437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77327 ] 00:16:45.388 [2024-04-26 21:19:34.635693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.644 [2024-04-26 21:19:34.688589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.211 21:19:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:46.211 21:19:35 -- common/autotest_common.sh@850 -- # return 0 00:16:46.211 21:19:35 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:16:46.211 21:19:35 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:16:46.211 21:19:35 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:16:46.211 21:19:35 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:16:46.211 21:19:35 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:16:46.211 21:19:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:46.211 21:19:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:46.211 21:19:35 -- common/autotest_common.sh@10 -- # set +x 00:16:46.469 ************************************ 00:16:46.469 START TEST accel_assign_opcode 00:16:46.469 ************************************ 00:16:46.469 21:19:35 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:16:46.469 21:19:35 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:16:46.469 21:19:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.469 21:19:35 -- common/autotest_common.sh@10 -- # set +x 00:16:46.469 [2024-04-26 21:19:35.499526] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:16:46.469 21:19:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.469 21:19:35 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:16:46.469 21:19:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.469 21:19:35 -- common/autotest_common.sh@10 -- # set +x 00:16:46.469 [2024-04-26 21:19:35.511484] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:16:46.469 21:19:35 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.469 21:19:35 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:16:46.469 21:19:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.469 21:19:35 -- common/autotest_common.sh@10 -- # set +x 00:16:46.469 21:19:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.469 21:19:35 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:16:46.469 21:19:35 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:16:46.469 21:19:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.469 21:19:35 -- accel/accel_rpc.sh@42 -- # grep software 00:16:46.469 21:19:35 -- common/autotest_common.sh@10 -- # set +x 00:16:46.469 21:19:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.728 software 00:16:46.728 00:16:46.728 real 0m0.254s 00:16:46.728 user 0m0.049s 00:16:46.728 ************************************ 00:16:46.728 END TEST accel_assign_opcode 00:16:46.728 ************************************ 00:16:46.728 sys 0m0.016s 00:16:46.728 21:19:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:46.728 21:19:35 -- common/autotest_common.sh@10 -- # set +x 00:16:46.728 21:19:35 -- accel/accel_rpc.sh@55 -- # killprocess 77327 00:16:46.728 21:19:35 -- common/autotest_common.sh@936 -- # '[' -z 77327 ']' 00:16:46.728 21:19:35 -- common/autotest_common.sh@940 -- # kill -0 77327 00:16:46.728 21:19:35 -- common/autotest_common.sh@941 -- # uname 00:16:46.728 21:19:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:46.728 21:19:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77327 00:16:46.728 killing process with pid 77327 00:16:46.728 21:19:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:46.728 21:19:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:46.728 21:19:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77327' 00:16:46.728 21:19:35 -- common/autotest_common.sh@955 -- # kill 77327 00:16:46.728 21:19:35 -- common/autotest_common.sh@960 -- # wait 77327 00:16:46.987 00:16:46.987 real 0m1.831s 00:16:46.987 user 0m1.916s 00:16:46.987 sys 0m0.488s 00:16:46.987 ************************************ 00:16:46.987 END TEST accel_rpc 00:16:46.987 ************************************ 00:16:46.987 21:19:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:46.987 21:19:36 -- common/autotest_common.sh@10 -- # set +x 00:16:46.987 21:19:36 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:46.987 21:19:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:46.987 21:19:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:46.987 21:19:36 -- common/autotest_common.sh@10 -- # set +x 00:16:47.246 ************************************ 00:16:47.246 START TEST app_cmdline 00:16:47.246 ************************************ 00:16:47.246 21:19:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:47.246 * Looking for test storage... 00:16:47.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:16:47.246 21:19:36 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:16:47.246 21:19:36 -- app/cmdline.sh@17 -- # spdk_tgt_pid=77447 00:16:47.246 21:19:36 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:16:47.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
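[editor's note] The accel_rpc suite traced above drives opcode assignment over JSON-RPC before the framework is initialized; the same sequence can be issued by hand with scripts/rpc.py against a target started with --wait-for-rpc, as a sketch (all three method names appear verbatim in the trace):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  # Route the 'copy' opcode to the software module while init is still pending.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  # Should list 'software' against the copy opcode, as the test greps for above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments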
00:16:47.246 21:19:36 -- app/cmdline.sh@18 -- # waitforlisten 77447 00:16:47.246 21:19:36 -- common/autotest_common.sh@817 -- # '[' -z 77447 ']' 00:16:47.246 21:19:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.246 21:19:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:47.246 21:19:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.246 21:19:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:47.246 21:19:36 -- common/autotest_common.sh@10 -- # set +x 00:16:47.247 [2024-04-26 21:19:36.465643] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:47.247 [2024-04-26 21:19:36.465713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77447 ] 00:16:47.506 [2024-04-26 21:19:36.603485] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.506 [2024-04-26 21:19:36.654410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.471 21:19:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:48.471 21:19:37 -- common/autotest_common.sh@850 -- # return 0 00:16:48.471 21:19:37 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:16:48.471 { 00:16:48.471 "fields": { 00:16:48.471 "commit": "8571999d8", 00:16:48.471 "major": 24, 00:16:48.471 "minor": 5, 00:16:48.471 "patch": 0, 00:16:48.471 "suffix": "-pre" 00:16:48.471 }, 00:16:48.471 "version": "SPDK v24.05-pre git sha1 8571999d8" 00:16:48.471 } 00:16:48.472 21:19:37 -- app/cmdline.sh@22 -- # expected_methods=() 00:16:48.472 21:19:37 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:16:48.472 21:19:37 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:16:48.472 21:19:37 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:16:48.472 21:19:37 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:16:48.472 21:19:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.472 21:19:37 -- common/autotest_common.sh@10 -- # set +x 00:16:48.472 21:19:37 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:16:48.472 21:19:37 -- app/cmdline.sh@26 -- # sort 00:16:48.472 21:19:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.472 21:19:37 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:16:48.472 21:19:37 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:16:48.472 21:19:37 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:48.472 21:19:37 -- common/autotest_common.sh@638 -- # local es=0 00:16:48.472 21:19:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:48.472 21:19:37 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.472 21:19:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:48.472 21:19:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.472 21:19:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:48.472 21:19:37 -- common/autotest_common.sh@632 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.472 21:19:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:48.472 21:19:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.472 21:19:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:48.472 21:19:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:48.732 2024/04/26 21:19:37 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:16:48.732 request: 00:16:48.732 { 00:16:48.732 "method": "env_dpdk_get_mem_stats", 00:16:48.732 "params": {} 00:16:48.732 } 00:16:48.732 Got JSON-RPC error response 00:16:48.732 GoRPCClient: error on JSON-RPC call 00:16:48.732 21:19:37 -- common/autotest_common.sh@641 -- # es=1 00:16:48.732 21:19:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:48.732 21:19:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:48.732 21:19:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:48.732 21:19:37 -- app/cmdline.sh@1 -- # killprocess 77447 00:16:48.732 21:19:37 -- common/autotest_common.sh@936 -- # '[' -z 77447 ']' 00:16:48.732 21:19:37 -- common/autotest_common.sh@940 -- # kill -0 77447 00:16:48.732 21:19:37 -- common/autotest_common.sh@941 -- # uname 00:16:48.732 21:19:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.732 21:19:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77447 00:16:48.732 killing process with pid 77447 00:16:48.732 21:19:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:48.732 21:19:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:48.732 21:19:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77447' 00:16:48.732 21:19:37 -- common/autotest_common.sh@955 -- # kill 77447 00:16:48.732 21:19:37 -- common/autotest_common.sh@960 -- # wait 77447 00:16:48.991 00:16:48.991 real 0m1.943s 00:16:48.991 user 0m2.355s 00:16:48.991 sys 0m0.476s 00:16:48.991 21:19:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:48.991 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:48.992 ************************************ 00:16:48.992 END TEST app_cmdline 00:16:48.992 ************************************ 00:16:49.251 21:19:38 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:16:49.251 21:19:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:49.251 21:19:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.251 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:49.251 ************************************ 00:16:49.251 START TEST version 00:16:49.251 ************************************ 00:16:49.251 21:19:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:16:49.251 * Looking for test storage... 
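[editor's note] The app_cmdline run above starts spdk_tgt with an RPC allow-list, so only the two whitelisted methods succeed; a sketch of the same check outside the harness (process management handled loosely here):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # Allowed: returns the version object seen above ("SPDK v24.05-pre git sha1 8571999d8").
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  # Anything else is rejected, matching the traced error: Code=-32601 Msg=Method not found.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats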
00:16:49.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:16:49.251 21:19:38 -- app/version.sh@17 -- # get_header_version major 00:16:49.251 21:19:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:49.251 21:19:38 -- app/version.sh@14 -- # cut -f2 00:16:49.251 21:19:38 -- app/version.sh@14 -- # tr -d '"' 00:16:49.251 21:19:38 -- app/version.sh@17 -- # major=24 00:16:49.251 21:19:38 -- app/version.sh@18 -- # get_header_version minor 00:16:49.251 21:19:38 -- app/version.sh@14 -- # cut -f2 00:16:49.251 21:19:38 -- app/version.sh@14 -- # tr -d '"' 00:16:49.251 21:19:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:49.251 21:19:38 -- app/version.sh@18 -- # minor=5 00:16:49.510 21:19:38 -- app/version.sh@19 -- # get_header_version patch 00:16:49.510 21:19:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:49.510 21:19:38 -- app/version.sh@14 -- # tr -d '"' 00:16:49.510 21:19:38 -- app/version.sh@14 -- # cut -f2 00:16:49.510 21:19:38 -- app/version.sh@19 -- # patch=0 00:16:49.510 21:19:38 -- app/version.sh@20 -- # get_header_version suffix 00:16:49.510 21:19:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:49.510 21:19:38 -- app/version.sh@14 -- # cut -f2 00:16:49.510 21:19:38 -- app/version.sh@14 -- # tr -d '"' 00:16:49.510 21:19:38 -- app/version.sh@20 -- # suffix=-pre 00:16:49.510 21:19:38 -- app/version.sh@22 -- # version=24.5 00:16:49.511 21:19:38 -- app/version.sh@25 -- # (( patch != 0 )) 00:16:49.511 21:19:38 -- app/version.sh@28 -- # version=24.5rc0 00:16:49.511 21:19:38 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:49.511 21:19:38 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:16:49.511 21:19:38 -- app/version.sh@30 -- # py_version=24.5rc0 00:16:49.511 21:19:38 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:16:49.511 00:16:49.511 real 0m0.207s 00:16:49.511 user 0m0.116s 00:16:49.511 sys 0m0.132s 00:16:49.511 21:19:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:49.511 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:49.511 ************************************ 00:16:49.511 END TEST version 00:16:49.511 ************************************ 00:16:49.511 21:19:38 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:16:49.511 21:19:38 -- spdk/autotest.sh@194 -- # uname -s 00:16:49.511 21:19:38 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:16:49.511 21:19:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:49.511 21:19:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:49.511 21:19:38 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:16:49.511 21:19:38 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:16:49.511 21:19:38 -- spdk/autotest.sh@258 -- # timing_exit lib 00:16:49.511 21:19:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:49.511 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:49.511 21:19:38 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:16:49.511 21:19:38 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:16:49.511 21:19:38 -- 
spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:16:49.511 21:19:38 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:16:49.511 21:19:38 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:16:49.511 21:19:38 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:16:49.511 21:19:38 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:16:49.511 21:19:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:49.511 21:19:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.511 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:49.511 ************************************ 00:16:49.511 START TEST nvmf_tcp 00:16:49.511 ************************************ 00:16:49.511 21:19:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:16:49.770 * Looking for test storage... 00:16:49.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:49.770 21:19:38 -- nvmf/nvmf.sh@10 -- # uname -s 00:16:49.770 21:19:38 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:16:49.770 21:19:38 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:49.770 21:19:38 -- nvmf/common.sh@7 -- # uname -s 00:16:49.770 21:19:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.770 21:19:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.770 21:19:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.770 21:19:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.771 21:19:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.771 21:19:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.771 21:19:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.771 21:19:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.771 21:19:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.771 21:19:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.771 21:19:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:16:49.771 21:19:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:16:49.771 21:19:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.771 21:19:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.771 21:19:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:49.771 21:19:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.771 21:19:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:49.771 21:19:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.771 21:19:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.771 21:19:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.771 21:19:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.771 21:19:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.771 21:19:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.771 21:19:38 -- paths/export.sh@5 -- # export PATH 00:16:49.771 21:19:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.771 21:19:38 -- nvmf/common.sh@47 -- # : 0 00:16:49.771 21:19:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.771 21:19:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.771 21:19:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.771 21:19:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.771 21:19:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.771 21:19:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.771 21:19:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.771 21:19:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.771 21:19:38 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:49.771 21:19:38 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:16:49.771 21:19:38 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:16:49.771 21:19:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:49.771 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:49.771 21:19:38 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:16:49.771 21:19:38 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:16:49.771 21:19:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:49.771 21:19:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.771 21:19:38 -- common/autotest_common.sh@10 -- # set +x 00:16:49.771 ************************************ 00:16:49.771 START TEST nvmf_example 00:16:49.771 ************************************ 00:16:49.771 21:19:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:16:50.031 * Looking for test storage... 
00:16:50.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:50.031 21:19:39 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:50.031 21:19:39 -- nvmf/common.sh@7 -- # uname -s 00:16:50.031 21:19:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.031 21:19:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.031 21:19:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.031 21:19:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.031 21:19:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.031 21:19:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.031 21:19:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.031 21:19:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.031 21:19:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.031 21:19:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.031 21:19:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:16:50.031 21:19:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:16:50.031 21:19:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.031 21:19:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.031 21:19:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:50.031 21:19:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.031 21:19:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.031 21:19:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.031 21:19:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.031 21:19:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.031 21:19:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.031 21:19:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.031 21:19:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.031 21:19:39 -- paths/export.sh@5 -- # export PATH 00:16:50.031 21:19:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.031 21:19:39 -- nvmf/common.sh@47 -- # : 0 00:16:50.031 21:19:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.031 21:19:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.031 21:19:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.031 21:19:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.031 21:19:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.031 21:19:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.031 21:19:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.031 21:19:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.031 21:19:39 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:16:50.031 21:19:39 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:16:50.031 21:19:39 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:16:50.031 21:19:39 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:16:50.031 21:19:39 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:16:50.031 21:19:39 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:16:50.031 21:19:39 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:16:50.031 21:19:39 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:16:50.031 21:19:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:50.031 21:19:39 -- common/autotest_common.sh@10 -- # set +x 00:16:50.031 21:19:39 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:16:50.031 21:19:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:50.031 21:19:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.031 21:19:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:50.031 21:19:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:50.031 21:19:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:50.031 21:19:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.031 21:19:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.031 21:19:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.031 21:19:39 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:50.031 21:19:39 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:50.031 21:19:39 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:50.031 21:19:39 -- nvmf/common.sh@415 -- # [[ 
virt == phy-fallback ]] 00:16:50.031 21:19:39 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:50.031 21:19:39 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:50.031 21:19:39 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.031 21:19:39 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.031 21:19:39 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:50.031 21:19:39 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:50.031 21:19:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:50.031 21:19:39 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:50.031 21:19:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:50.031 21:19:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.031 21:19:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:50.031 21:19:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:50.031 21:19:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:50.031 21:19:39 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:50.031 21:19:39 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:50.031 Cannot find device "nvmf_init_br" 00:16:50.031 21:19:39 -- nvmf/common.sh@154 -- # true 00:16:50.031 21:19:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:50.031 Cannot find device "nvmf_tgt_br" 00:16:50.031 21:19:39 -- nvmf/common.sh@155 -- # true 00:16:50.031 21:19:39 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:50.031 Cannot find device "nvmf_tgt_br2" 00:16:50.031 21:19:39 -- nvmf/common.sh@156 -- # true 00:16:50.031 21:19:39 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:50.031 Cannot find device "nvmf_init_br" 00:16:50.031 21:19:39 -- nvmf/common.sh@157 -- # true 00:16:50.031 21:19:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:50.031 Cannot find device "nvmf_tgt_br" 00:16:50.031 21:19:39 -- nvmf/common.sh@158 -- # true 00:16:50.031 21:19:39 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:50.031 Cannot find device "nvmf_tgt_br2" 00:16:50.031 21:19:39 -- nvmf/common.sh@159 -- # true 00:16:50.031 21:19:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:50.031 Cannot find device "nvmf_br" 00:16:50.031 21:19:39 -- nvmf/common.sh@160 -- # true 00:16:50.031 21:19:39 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:50.031 Cannot find device "nvmf_init_if" 00:16:50.031 21:19:39 -- nvmf/common.sh@161 -- # true 00:16:50.031 21:19:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.031 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.031 21:19:39 -- nvmf/common.sh@162 -- # true 00:16:50.290 21:19:39 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.290 21:19:39 -- nvmf/common.sh@163 -- # true 00:16:50.290 21:19:39 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:50.290 21:19:39 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:50.290 21:19:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:50.290 21:19:39 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:50.290 21:19:39 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:50.290 21:19:39 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:50.290 21:19:39 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:50.290 21:19:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:50.290 21:19:39 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:50.290 21:19:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:50.290 21:19:39 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:50.290 21:19:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:50.290 21:19:39 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:50.290 21:19:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:50.290 21:19:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:50.290 21:19:39 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:50.290 21:19:39 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:50.290 21:19:39 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:50.290 21:19:39 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:50.290 21:19:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:50.290 21:19:39 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:50.290 21:19:39 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:50.291 21:19:39 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:50.291 21:19:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:50.291 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.291 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:16:50.291 00:16:50.291 --- 10.0.0.2 ping statistics --- 00:16:50.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.291 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:16:50.291 21:19:39 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:50.291 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:50.291 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:16:50.291 00:16:50.291 --- 10.0.0.3 ping statistics --- 00:16:50.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.291 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:50.291 21:19:39 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:50.291 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:50.291 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:16:50.291 00:16:50.291 --- 10.0.0.1 ping statistics --- 00:16:50.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.291 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:50.291 21:19:39 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.291 21:19:39 -- nvmf/common.sh@422 -- # return 0 00:16:50.291 21:19:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:50.291 21:19:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.291 21:19:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:50.291 21:19:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:50.291 21:19:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.291 21:19:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:50.291 21:19:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:50.550 21:19:39 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:16:50.550 21:19:39 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:16:50.550 21:19:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:50.550 21:19:39 -- common/autotest_common.sh@10 -- # set +x 00:16:50.550 21:19:39 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:16:50.550 21:19:39 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:16:50.550 21:19:39 -- target/nvmf_example.sh@34 -- # nvmfpid=77813 00:16:50.550 21:19:39 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:16:50.550 21:19:39 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:50.550 21:19:39 -- target/nvmf_example.sh@36 -- # waitforlisten 77813 00:16:50.550 21:19:39 -- common/autotest_common.sh@817 -- # '[' -z 77813 ']' 00:16:50.550 21:19:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.550 21:19:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:50.550 21:19:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:50.550 21:19:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:50.550 21:19:39 -- common/autotest_common.sh@10 -- # set +x 00:16:51.484 21:19:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:51.484 21:19:40 -- common/autotest_common.sh@850 -- # return 0 00:16:51.484 21:19:40 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:16:51.484 21:19:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:51.484 21:19:40 -- common/autotest_common.sh@10 -- # set +x 00:16:51.484 21:19:40 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:51.485 21:19:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.485 21:19:40 -- common/autotest_common.sh@10 -- # set +x 00:16:51.485 21:19:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.485 21:19:40 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:16:51.485 21:19:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.485 21:19:40 -- common/autotest_common.sh@10 -- # set +x 00:16:51.485 21:19:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.485 21:19:40 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:16:51.485 21:19:40 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:51.485 21:19:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.485 21:19:40 -- common/autotest_common.sh@10 -- # set +x 00:16:51.485 21:19:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.485 21:19:40 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:16:51.485 21:19:40 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:51.485 21:19:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.485 21:19:40 -- common/autotest_common.sh@10 -- # set +x 00:16:51.485 21:19:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.485 21:19:40 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.485 21:19:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.485 21:19:40 -- common/autotest_common.sh@10 -- # set +x 00:16:51.485 21:19:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.485 21:19:40 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:51.485 21:19:40 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:03.702 Initializing NVMe Controllers 00:17:03.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:03.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:03.702 Initialization complete. Launching workers. 
00:17:03.702 ======================================================== 00:17:03.702 Latency(us) 00:17:03.702 Device Information : IOPS MiB/s Average min max 00:17:03.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15404.69 60.17 4154.49 655.52 23418.79 00:17:03.702 ======================================================== 00:17:03.702 Total : 15404.69 60.17 4154.49 655.52 23418.79 00:17:03.702 00:17:03.702 21:19:50 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:17:03.702 21:19:50 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:17:03.702 21:19:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:03.702 21:19:50 -- nvmf/common.sh@117 -- # sync 00:17:03.702 21:19:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.702 21:19:50 -- nvmf/common.sh@120 -- # set +e 00:17:03.702 21:19:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.702 21:19:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.702 rmmod nvme_tcp 00:17:03.702 rmmod nvme_fabrics 00:17:03.702 rmmod nvme_keyring 00:17:03.702 21:19:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.702 21:19:51 -- nvmf/common.sh@124 -- # set -e 00:17:03.702 21:19:51 -- nvmf/common.sh@125 -- # return 0 00:17:03.702 21:19:51 -- nvmf/common.sh@478 -- # '[' -n 77813 ']' 00:17:03.702 21:19:51 -- nvmf/common.sh@479 -- # killprocess 77813 00:17:03.702 21:19:51 -- common/autotest_common.sh@936 -- # '[' -z 77813 ']' 00:17:03.702 21:19:51 -- common/autotest_common.sh@940 -- # kill -0 77813 00:17:03.702 21:19:51 -- common/autotest_common.sh@941 -- # uname 00:17:03.702 21:19:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:03.702 21:19:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77813 00:17:03.702 21:19:51 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:17:03.702 21:19:51 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:17:03.702 killing process with pid 77813 00:17:03.702 21:19:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77813' 00:17:03.702 21:19:51 -- common/autotest_common.sh@955 -- # kill 77813 00:17:03.702 21:19:51 -- common/autotest_common.sh@960 -- # wait 77813 00:17:03.702 nvmf threads initialize successfully 00:17:03.702 bdev subsystem init successfully 00:17:03.702 created a nvmf target service 00:17:03.702 create targets's poll groups done 00:17:03.702 all subsystems of target started 00:17:03.702 nvmf target is running 00:17:03.702 all subsystems of target stopped 00:17:03.702 destroy targets's poll groups done 00:17:03.702 destroyed the nvmf target service 00:17:03.702 bdev subsystem finish successfully 00:17:03.702 nvmf threads destroy successfully 00:17:03.702 21:19:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:03.702 21:19:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:03.702 21:19:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:03.702 21:19:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.702 21:19:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:03.702 21:19:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.702 21:19:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.702 21:19:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.702 21:19:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:03.702 21:19:51 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:17:03.702 21:19:51 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:17:03.702 21:19:51 -- common/autotest_common.sh@10 -- # set +x 00:17:03.702 00:17:03.702 real 0m12.311s 00:17:03.702 user 0m44.720s 00:17:03.702 sys 0m1.596s 00:17:03.702 21:19:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:03.702 21:19:51 -- common/autotest_common.sh@10 -- # set +x 00:17:03.702 ************************************ 00:17:03.702 END TEST nvmf_example 00:17:03.702 ************************************ 00:17:03.702 21:19:51 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:17:03.702 21:19:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:03.702 21:19:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:03.702 21:19:51 -- common/autotest_common.sh@10 -- # set +x 00:17:03.702 ************************************ 00:17:03.702 START TEST nvmf_filesystem 00:17:03.702 ************************************ 00:17:03.702 21:19:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:17:03.702 * Looking for test storage... 00:17:03.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.702 21:19:51 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:17:03.702 21:19:51 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:17:03.702 21:19:51 -- common/autotest_common.sh@34 -- # set -e 00:17:03.702 21:19:51 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:17:03.702 21:19:51 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:17:03.702 21:19:51 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:17:03.702 21:19:51 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:17:03.702 21:19:51 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:17:03.702 21:19:51 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:17:03.702 21:19:51 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:17:03.702 21:19:51 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:17:03.702 21:19:51 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:17:03.702 21:19:51 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:17:03.702 21:19:51 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:17:03.702 21:19:51 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:17:03.702 21:19:51 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:17:03.702 21:19:51 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:17:03.702 21:19:51 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:17:03.702 21:19:51 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:17:03.702 21:19:51 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:17:03.702 21:19:51 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:17:03.702 21:19:51 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:17:03.702 21:19:51 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:17:03.702 21:19:51 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:17:03.702 21:19:51 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:17:03.702 21:19:51 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:17:03.702 21:19:51 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:03.702 21:19:51 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:17:03.702 21:19:51 -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:17:03.702 21:19:51 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:17:03.702 21:19:51 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:17:03.702 21:19:51 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:17:03.702 21:19:51 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:17:03.702 21:19:51 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:17:03.702 21:19:51 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:17:03.702 21:19:51 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:17:03.702 21:19:51 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:17:03.702 21:19:51 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:17:03.702 21:19:51 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:17:03.702 21:19:51 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:17:03.702 21:19:51 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:17:03.702 21:19:51 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:17:03.702 21:19:51 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:17:03.702 21:19:51 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:17:03.702 21:19:51 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:17:03.702 21:19:51 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:17:03.703 21:19:51 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:17:03.703 21:19:51 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:17:03.703 21:19:51 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:17:03.703 21:19:51 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:17:03.703 21:19:51 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:17:03.703 21:19:51 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:17:03.703 21:19:51 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:17:03.703 21:19:51 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:17:03.703 21:19:51 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:17:03.703 21:19:51 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:17:03.703 21:19:51 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:17:03.703 21:19:51 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:17:03.703 21:19:51 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:17:03.703 21:19:51 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:17:03.703 21:19:51 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:17:03.703 21:19:51 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:17:03.703 21:19:51 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:17:03.703 21:19:51 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:17:03.703 21:19:51 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:17:03.703 21:19:51 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:17:03.703 21:19:51 -- common/build_config.sh@59 -- # CONFIG_GOLANG=y 00:17:03.703 21:19:51 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:17:03.703 21:19:51 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:17:03.703 21:19:51 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:17:03.703 21:19:51 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:17:03.703 21:19:51 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:17:03.703 21:19:51 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:17:03.703 21:19:51 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:17:03.703 
21:19:51 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:17:03.703 21:19:51 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:17:03.703 21:19:51 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:17:03.703 21:19:51 -- common/build_config.sh@70 -- # CONFIG_AVAHI=y 00:17:03.703 21:19:51 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:17:03.703 21:19:51 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:17:03.703 21:19:51 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:17:03.703 21:19:51 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:17:03.703 21:19:51 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:17:03.703 21:19:51 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:17:03.703 21:19:51 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:17:03.703 21:19:51 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:17:03.703 21:19:51 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:17:03.703 21:19:51 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:17:03.703 21:19:51 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:17:03.703 21:19:51 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:17:03.703 21:19:51 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:17:03.703 21:19:51 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:17:03.703 21:19:51 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:17:03.703 21:19:51 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:17:03.703 21:19:51 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:17:03.703 21:19:51 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:17:03.703 21:19:51 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:17:03.703 21:19:51 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:17:03.703 21:19:51 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:17:03.703 21:19:51 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:17:03.703 21:19:51 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:17:03.703 21:19:51 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:17:03.703 21:19:51 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:17:03.703 21:19:51 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:17:03.703 21:19:51 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:17:03.703 21:19:51 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:17:03.703 #define SPDK_CONFIG_H 00:17:03.703 #define SPDK_CONFIG_APPS 1 00:17:03.703 #define SPDK_CONFIG_ARCH native 00:17:03.703 #undef SPDK_CONFIG_ASAN 00:17:03.703 #define SPDK_CONFIG_AVAHI 1 00:17:03.703 #undef SPDK_CONFIG_CET 00:17:03.703 #define SPDK_CONFIG_COVERAGE 1 00:17:03.703 #define SPDK_CONFIG_CROSS_PREFIX 00:17:03.703 #undef SPDK_CONFIG_CRYPTO 00:17:03.703 #undef SPDK_CONFIG_CRYPTO_MLX5 00:17:03.703 #undef SPDK_CONFIG_CUSTOMOCF 00:17:03.703 #undef SPDK_CONFIG_DAOS 00:17:03.703 #define SPDK_CONFIG_DAOS_DIR 00:17:03.703 #define SPDK_CONFIG_DEBUG 1 00:17:03.703 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:17:03.703 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:17:03.703 #define SPDK_CONFIG_DPDK_INC_DIR 
//home/vagrant/spdk_repo/dpdk/build/include 00:17:03.703 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:17:03.703 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:17:03.703 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:03.703 #define SPDK_CONFIG_EXAMPLES 1 00:17:03.703 #undef SPDK_CONFIG_FC 00:17:03.703 #define SPDK_CONFIG_FC_PATH 00:17:03.703 #define SPDK_CONFIG_FIO_PLUGIN 1 00:17:03.703 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:17:03.703 #undef SPDK_CONFIG_FUSE 00:17:03.703 #undef SPDK_CONFIG_FUZZER 00:17:03.703 #define SPDK_CONFIG_FUZZER_LIB 00:17:03.703 #define SPDK_CONFIG_GOLANG 1 00:17:03.703 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:17:03.703 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:17:03.703 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:17:03.703 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:17:03.703 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:17:03.703 #undef SPDK_CONFIG_HAVE_LIBBSD 00:17:03.703 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:17:03.703 #define SPDK_CONFIG_IDXD 1 00:17:03.703 #undef SPDK_CONFIG_IDXD_KERNEL 00:17:03.703 #undef SPDK_CONFIG_IPSEC_MB 00:17:03.703 #define SPDK_CONFIG_IPSEC_MB_DIR 00:17:03.703 #define SPDK_CONFIG_ISAL 1 00:17:03.703 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:17:03.703 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:17:03.703 #define SPDK_CONFIG_LIBDIR 00:17:03.703 #undef SPDK_CONFIG_LTO 00:17:03.703 #define SPDK_CONFIG_MAX_LCORES 00:17:03.703 #define SPDK_CONFIG_NVME_CUSE 1 00:17:03.703 #undef SPDK_CONFIG_OCF 00:17:03.703 #define SPDK_CONFIG_OCF_PATH 00:17:03.703 #define SPDK_CONFIG_OPENSSL_PATH 00:17:03.703 #undef SPDK_CONFIG_PGO_CAPTURE 00:17:03.703 #define SPDK_CONFIG_PGO_DIR 00:17:03.703 #undef SPDK_CONFIG_PGO_USE 00:17:03.703 #define SPDK_CONFIG_PREFIX /usr/local 00:17:03.703 #undef SPDK_CONFIG_RAID5F 00:17:03.703 #undef SPDK_CONFIG_RBD 00:17:03.703 #define SPDK_CONFIG_RDMA 1 00:17:03.703 #define SPDK_CONFIG_RDMA_PROV verbs 00:17:03.703 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:17:03.703 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:17:03.703 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:17:03.703 #define SPDK_CONFIG_SHARED 1 00:17:03.703 #undef SPDK_CONFIG_SMA 00:17:03.703 #define SPDK_CONFIG_TESTS 1 00:17:03.703 #undef SPDK_CONFIG_TSAN 00:17:03.703 #define SPDK_CONFIG_UBLK 1 00:17:03.703 #define SPDK_CONFIG_UBSAN 1 00:17:03.703 #undef SPDK_CONFIG_UNIT_TESTS 00:17:03.703 #undef SPDK_CONFIG_URING 00:17:03.703 #define SPDK_CONFIG_URING_PATH 00:17:03.703 #undef SPDK_CONFIG_URING_ZNS 00:17:03.703 #define SPDK_CONFIG_USDT 1 00:17:03.703 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:17:03.703 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:17:03.703 #undef SPDK_CONFIG_VFIO_USER 00:17:03.703 #define SPDK_CONFIG_VFIO_USER_DIR 00:17:03.703 #define SPDK_CONFIG_VHOST 1 00:17:03.703 #define SPDK_CONFIG_VIRTIO 1 00:17:03.703 #undef SPDK_CONFIG_VTUNE 00:17:03.703 #define SPDK_CONFIG_VTUNE_DIR 00:17:03.703 #define SPDK_CONFIG_WERROR 1 00:17:03.703 #define SPDK_CONFIG_WPDK_DIR 00:17:03.703 #undef SPDK_CONFIG_XNVME 00:17:03.703 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:17:03.703 21:19:51 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:17:03.703 21:19:51 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:03.703 21:19:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.703 21:19:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.703 21:19:51 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.703 21:19:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.703 21:19:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.703 21:19:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.703 21:19:51 -- paths/export.sh@5 -- # export PATH 00:17:03.703 21:19:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.704 21:19:51 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:17:03.704 21:19:51 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:17:03.704 21:19:51 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:17:03.704 21:19:51 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:17:03.704 21:19:51 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:17:03.704 21:19:51 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:17:03.704 21:19:51 -- pm/common@67 -- # TEST_TAG=N/A 00:17:03.704 21:19:51 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:17:03.704 21:19:51 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:17:03.704 21:19:51 -- pm/common@71 -- # uname -s 00:17:03.704 21:19:51 -- pm/common@71 -- # PM_OS=Linux 00:17:03.704 21:19:51 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:17:03.704 21:19:51 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:17:03.704 21:19:51 -- 
pm/common@76 -- # [[ Linux == Linux ]] 00:17:03.704 21:19:51 -- pm/common@76 -- # [[ QEMU != QEMU ]] 00:17:03.704 21:19:51 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:17:03.704 21:19:51 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:17:03.704 21:19:51 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:17:03.704 21:19:51 -- common/autotest_common.sh@57 -- # : 1 00:17:03.704 21:19:51 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:17:03.704 21:19:51 -- common/autotest_common.sh@61 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:17:03.704 21:19:51 -- common/autotest_common.sh@63 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:17:03.704 21:19:51 -- common/autotest_common.sh@65 -- # : 1 00:17:03.704 21:19:51 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:17:03.704 21:19:51 -- common/autotest_common.sh@67 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:17:03.704 21:19:51 -- common/autotest_common.sh@69 -- # : 00:17:03.704 21:19:51 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:17:03.704 21:19:51 -- common/autotest_common.sh@71 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:17:03.704 21:19:51 -- common/autotest_common.sh@73 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:17:03.704 21:19:51 -- common/autotest_common.sh@75 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:17:03.704 21:19:51 -- common/autotest_common.sh@77 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:17:03.704 21:19:51 -- common/autotest_common.sh@79 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:17:03.704 21:19:51 -- common/autotest_common.sh@81 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:17:03.704 21:19:51 -- common/autotest_common.sh@83 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:17:03.704 21:19:51 -- common/autotest_common.sh@85 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:17:03.704 21:19:51 -- common/autotest_common.sh@87 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:17:03.704 21:19:51 -- common/autotest_common.sh@89 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:17:03.704 21:19:51 -- common/autotest_common.sh@91 -- # : 1 00:17:03.704 21:19:51 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:17:03.704 21:19:51 -- common/autotest_common.sh@93 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:17:03.704 21:19:51 -- common/autotest_common.sh@95 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:17:03.704 21:19:51 -- common/autotest_common.sh@97 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:17:03.704 21:19:51 -- common/autotest_common.sh@99 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:17:03.704 21:19:51 -- common/autotest_common.sh@101 -- # : tcp 00:17:03.704 21:19:51 -- 
common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:17:03.704 21:19:51 -- common/autotest_common.sh@103 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:17:03.704 21:19:51 -- common/autotest_common.sh@105 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:17:03.704 21:19:51 -- common/autotest_common.sh@107 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:17:03.704 21:19:51 -- common/autotest_common.sh@109 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:17:03.704 21:19:51 -- common/autotest_common.sh@111 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:17:03.704 21:19:51 -- common/autotest_common.sh@113 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:17:03.704 21:19:51 -- common/autotest_common.sh@115 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:17:03.704 21:19:51 -- common/autotest_common.sh@117 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:17:03.704 21:19:51 -- common/autotest_common.sh@119 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:17:03.704 21:19:51 -- common/autotest_common.sh@121 -- # : 1 00:17:03.704 21:19:51 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:17:03.704 21:19:51 -- common/autotest_common.sh@123 -- # : /home/vagrant/spdk_repo/dpdk/build 00:17:03.704 21:19:51 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:17:03.704 21:19:51 -- common/autotest_common.sh@125 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:17:03.704 21:19:51 -- common/autotest_common.sh@127 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:17:03.704 21:19:51 -- common/autotest_common.sh@129 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:17:03.704 21:19:51 -- common/autotest_common.sh@131 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:17:03.704 21:19:51 -- common/autotest_common.sh@133 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:17:03.704 21:19:51 -- common/autotest_common.sh@135 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:17:03.704 21:19:51 -- common/autotest_common.sh@137 -- # : v23.11 00:17:03.704 21:19:51 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:17:03.704 21:19:51 -- common/autotest_common.sh@139 -- # : true 00:17:03.704 21:19:51 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:17:03.704 21:19:51 -- common/autotest_common.sh@141 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:17:03.704 21:19:51 -- common/autotest_common.sh@143 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:17:03.704 21:19:51 -- common/autotest_common.sh@145 -- # : 1 00:17:03.704 21:19:51 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:17:03.704 21:19:51 -- common/autotest_common.sh@147 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:17:03.704 21:19:51 -- 
common/autotest_common.sh@149 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:17:03.704 21:19:51 -- common/autotest_common.sh@151 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:17:03.704 21:19:51 -- common/autotest_common.sh@153 -- # : 00:17:03.704 21:19:51 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:17:03.704 21:19:51 -- common/autotest_common.sh@155 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:17:03.704 21:19:51 -- common/autotest_common.sh@157 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:17:03.704 21:19:51 -- common/autotest_common.sh@159 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:17:03.704 21:19:51 -- common/autotest_common.sh@161 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:17:03.704 21:19:51 -- common/autotest_common.sh@163 -- # : 0 00:17:03.704 21:19:51 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:17:03.704 21:19:51 -- common/autotest_common.sh@166 -- # : 00:17:03.704 21:19:51 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:17:03.704 21:19:51 -- common/autotest_common.sh@168 -- # : 1 00:17:03.704 21:19:51 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:17:03.704 21:19:51 -- common/autotest_common.sh@170 -- # : 1 00:17:03.704 21:19:51 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:17:03.704 21:19:51 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:17:03.704 21:19:51 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:17:03.704 21:19:51 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:17:03.704 21:19:51 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:17:03.704 21:19:51 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:03.704 21:19:51 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:03.704 21:19:51 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:03.704 21:19:51 -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:03.704 21:19:51 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:17:03.705 21:19:51 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:17:03.705 21:19:51 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:03.705 21:19:51 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:03.705 21:19:51 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:17:03.705 21:19:51 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:17:03.705 21:19:51 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:17:03.705 21:19:51 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:17:03.705 21:19:51 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:17:03.705 21:19:51 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:17:03.705 21:19:51 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:17:03.705 21:19:51 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:17:03.705 21:19:51 -- common/autotest_common.sh@199 -- # cat 00:17:03.705 21:19:51 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:17:03.705 21:19:51 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:17:03.705 21:19:51 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:17:03.705 21:19:51 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:17:03.705 21:19:51 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:17:03.705 21:19:51 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:17:03.705 21:19:51 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:17:03.705 21:19:51 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:17:03.705 21:19:51 -- 
common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:17:03.705 21:19:51 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:17:03.705 21:19:51 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:17:03.705 21:19:51 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:03.705 21:19:51 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:03.705 21:19:51 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:03.705 21:19:51 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:03.705 21:19:51 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:17:03.705 21:19:51 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:17:03.705 21:19:51 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:03.705 21:19:51 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:03.705 21:19:51 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:17:03.705 21:19:51 -- common/autotest_common.sh@252 -- # export valgrind= 00:17:03.705 21:19:51 -- common/autotest_common.sh@252 -- # valgrind= 00:17:03.705 21:19:51 -- common/autotest_common.sh@258 -- # uname -s 00:17:03.705 21:19:51 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:17:03.705 21:19:51 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:17:03.705 21:19:51 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:17:03.705 21:19:51 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:17:03.705 21:19:51 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:17:03.705 21:19:51 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:17:03.705 21:19:51 -- common/autotest_common.sh@268 -- # MAKE=make 00:17:03.705 21:19:51 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:17:03.705 21:19:51 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:17:03.705 21:19:51 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:17:03.705 21:19:51 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:17:03.705 21:19:51 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:17:03.705 21:19:51 -- common/autotest_common.sh@289 -- # for i in "$@" 00:17:03.705 21:19:51 -- common/autotest_common.sh@290 -- # case "$i" in 00:17:03.705 21:19:51 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:17:03.705 21:19:51 -- common/autotest_common.sh@307 -- # [[ -z 78064 ]] 00:17:03.705 21:19:51 -- common/autotest_common.sh@307 -- # kill -0 78064 00:17:03.705 21:19:51 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:17:03.705 21:19:51 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:17:03.705 21:19:51 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:17:03.705 21:19:51 -- common/autotest_common.sh@320 -- # local mount target_dir 00:17:03.705 21:19:51 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:17:03.705 21:19:51 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:17:03.705 21:19:51 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:17:03.705 21:19:51 -- common/autotest_common.sh@327 -- # 
mktemp -udt spdk.XXXXXX 00:17:03.705 21:19:51 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.CP2v13 00:17:03.705 21:19:51 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:17:03.705 21:19:51 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:17:03.705 21:19:51 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:17:03.705 21:19:51 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.CP2v13/tests/target /tmp/spdk.CP2v13 00:17:03.705 21:19:51 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:17:03.705 21:19:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:03.705 21:19:51 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:17:03.705 21:19:51 -- common/autotest_common.sh@316 -- # df -T 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=devtmpfs 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=4194304 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4194304 00:17:03.705 21:19:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:17:03.705 21:19:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=6266613760 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267891712 00:17:03.705 21:19:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=1277952 00:17:03.705 21:19:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=2494353408 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=2507157504 00:17:03.705 21:19:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=12804096 00:17:03.705 21:19:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=13074923520 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:17:03.705 21:19:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=5965070336 00:17:03.705 21:19:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=13074923520 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:17:03.705 21:19:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=5965070336 00:17:03.705 21:19:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:03.705 21:19:51 
-- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda2 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=843546624 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1012768768 00:17:03.705 21:19:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=100016128 00:17:03.705 21:19:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=6267748352 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267891712 00:17:03.705 21:19:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=143360 00:17:03.705 21:19:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda3 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=92499968 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=104607744 00:17:03.705 21:19:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=12107776 00:17:03.705 21:19:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253572608 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253576704 00:17:03.705 21:19:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:17:03.705 21:19:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:17:03.705 21:19:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=91723550720 00:17:03.705 21:19:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:17:03.705 21:19:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=7979229184 00:17:03.705 21:19:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:03.706 21:19:51 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:17:03.706 * Looking for test storage... 
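The trace above is autotest_common.sh's set_test_storage helper taking stock of every mounted filesystem before deciding where the test scratch area will live: df -T is filtered through grep -v Filesystem, each line is read into the mounts/fss/sizes/avails/uses arrays, and the 2147483648-byte (2 GiB) request is padded to 2214592512 bytes before any comparison. A rough bash reconstruction of that scan, for readers following the log; it is an approximation of the helper, not the script verbatim:

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))    # df -T prints 1K blocks; byte conversion assumed here
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)

With the arrays filled, the candidate directories (the test directory itself, then a mktemp fallback under /tmp) are walked in order and the first one whose backing mount offers at least the padded request wins. As the next lines show, that is /home, a btrfs volume with roughly 13 GB available, which is exported as SPDK_TEST_STORAGE.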
00:17:03.706 21:19:51 -- common/autotest_common.sh@357 -- # local target_space new_size 00:17:03.706 21:19:51 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:17:03.706 21:19:51 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.706 21:19:51 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:17:03.706 21:19:51 -- common/autotest_common.sh@361 -- # mount=/home 00:17:03.706 21:19:51 -- common/autotest_common.sh@363 -- # target_space=13074923520 00:17:03.706 21:19:51 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:17:03.706 21:19:51 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:17:03.706 21:19:51 -- common/autotest_common.sh@369 -- # [[ btrfs == tmpfs ]] 00:17:03.706 21:19:51 -- common/autotest_common.sh@369 -- # [[ btrfs == ramfs ]] 00:17:03.706 21:19:51 -- common/autotest_common.sh@369 -- # [[ /home == / ]] 00:17:03.706 21:19:51 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.706 21:19:51 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.706 21:19:51 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.706 21:19:51 -- common/autotest_common.sh@378 -- # return 0 00:17:03.706 21:19:51 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:17:03.706 21:19:51 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:17:03.706 21:19:51 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:17:03.706 21:19:51 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:17:03.706 21:19:51 -- common/autotest_common.sh@1673 -- # true 00:17:03.706 21:19:51 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:17:03.706 21:19:51 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:17:03.706 21:19:51 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:17:03.706 21:19:51 -- common/autotest_common.sh@27 -- # exec 00:17:03.706 21:19:51 -- common/autotest_common.sh@29 -- # exec 00:17:03.706 21:19:51 -- common/autotest_common.sh@31 -- # xtrace_restore 00:17:03.706 21:19:51 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:17:03.706 21:19:51 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:17:03.706 21:19:51 -- common/autotest_common.sh@18 -- # set -x 00:17:03.706 21:19:51 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:03.706 21:19:51 -- nvmf/common.sh@7 -- # uname -s 00:17:03.706 21:19:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.706 21:19:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.706 21:19:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.706 21:19:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.706 21:19:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.706 21:19:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.706 21:19:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.706 21:19:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.706 21:19:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.706 21:19:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.706 21:19:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:17:03.706 21:19:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:17:03.706 21:19:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.706 21:19:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.706 21:19:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:03.706 21:19:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.706 21:19:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:03.706 21:19:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.706 21:19:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.706 21:19:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.706 21:19:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.706 21:19:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.706 21:19:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.706 21:19:51 -- paths/export.sh@5 -- # export PATH 00:17:03.706 21:19:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.706 21:19:51 -- nvmf/common.sh@47 -- # : 0 00:17:03.706 21:19:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.706 21:19:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.706 21:19:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.706 21:19:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.706 21:19:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.706 21:19:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.706 21:19:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.706 21:19:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.706 21:19:51 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:17:03.706 21:19:51 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:03.706 21:19:51 -- target/filesystem.sh@15 -- # nvmftestinit 00:17:03.706 21:19:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:03.706 21:19:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.706 21:19:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:03.706 21:19:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:03.706 21:19:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:03.706 21:19:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.706 21:19:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.706 21:19:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.706 21:19:51 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:03.706 21:19:51 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:03.706 21:19:51 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:03.706 21:19:51 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:03.706 21:19:51 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:03.706 21:19:51 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:03.706 21:19:51 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.706 21:19:51 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.706 21:19:51 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:03.706 21:19:51 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:03.706 21:19:51 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:03.706 21:19:51 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:03.706 21:19:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:03.706 21:19:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.706 21:19:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:03.706 21:19:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:03.706 21:19:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:03.706 21:19:51 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:03.706 21:19:51 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:03.707 21:19:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:03.707 Cannot find device "nvmf_tgt_br" 00:17:03.707 21:19:51 -- nvmf/common.sh@155 -- # true 00:17:03.707 21:19:51 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:03.707 Cannot find device "nvmf_tgt_br2" 00:17:03.707 21:19:51 -- nvmf/common.sh@156 -- # true 00:17:03.707 21:19:51 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:03.707 21:19:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:03.707 Cannot find device "nvmf_tgt_br" 00:17:03.707 21:19:51 -- nvmf/common.sh@158 -- # true 00:17:03.707 21:19:51 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:03.707 Cannot find device "nvmf_tgt_br2" 00:17:03.707 21:19:51 -- nvmf/common.sh@159 -- # true 00:17:03.707 21:19:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:03.707 21:19:51 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:03.707 21:19:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.707 21:19:51 -- nvmf/common.sh@162 -- # true 00:17:03.707 21:19:51 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.707 21:19:51 -- nvmf/common.sh@163 -- # true 00:17:03.707 21:19:51 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:03.707 21:19:51 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:03.707 21:19:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:03.707 21:19:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:03.707 21:19:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:03.707 21:19:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:03.707 21:19:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:03.707 21:19:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:03.707 21:19:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:03.707 21:19:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:03.707 21:19:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:03.707 21:19:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:03.707 21:19:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:03.707 21:19:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:03.707 21:19:52 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:03.707 21:19:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:03.707 21:19:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:03.707 21:19:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:03.707 21:19:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:03.707 21:19:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:03.707 21:19:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:03.707 21:19:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:03.707 21:19:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:03.707 21:19:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:03.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:17:03.707 00:17:03.707 --- 10.0.0.2 ping statistics --- 00:17:03.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.707 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:03.707 21:19:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:03.707 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:03.707 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:03.707 00:17:03.707 --- 10.0.0.3 ping statistics --- 00:17:03.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.707 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:03.707 21:19:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:03.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:17:03.707 00:17:03.707 --- 10.0.0.1 ping statistics --- 00:17:03.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.707 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:03.707 21:19:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.707 21:19:52 -- nvmf/common.sh@422 -- # return 0 00:17:03.707 21:19:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:03.707 21:19:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.707 21:19:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:03.707 21:19:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:03.707 21:19:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.707 21:19:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:03.707 21:19:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:03.707 21:19:52 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:17:03.707 21:19:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:03.707 21:19:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:03.707 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:17:03.707 ************************************ 00:17:03.707 START TEST nvmf_filesystem_no_in_capsule 00:17:03.707 ************************************ 00:17:03.707 21:19:52 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:17:03.707 21:19:52 -- target/filesystem.sh@47 -- # in_capsule=0 00:17:03.707 21:19:52 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:17:03.707 21:19:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:03.707 21:19:52 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:17:03.707 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:17:03.707 21:19:52 -- nvmf/common.sh@470 -- # nvmfpid=78229 00:17:03.707 21:19:52 -- nvmf/common.sh@471 -- # waitforlisten 78229 00:17:03.707 21:19:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:03.707 21:19:52 -- common/autotest_common.sh@817 -- # '[' -z 78229 ']' 00:17:03.707 21:19:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.707 21:19:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:03.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.707 21:19:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.707 21:19:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:03.707 21:19:52 -- common/autotest_common.sh@10 -- # set +x 00:17:03.707 [2024-04-26 21:19:52.295742] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:03.707 [2024-04-26 21:19:52.295822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.707 [2024-04-26 21:19:52.438933] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.707 [2024-04-26 21:19:52.495258] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.707 [2024-04-26 21:19:52.495312] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.707 [2024-04-26 21:19:52.495320] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.707 [2024-04-26 21:19:52.495325] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.707 [2024-04-26 21:19:52.495340] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
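At this point nvmfappstart has launched the target (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) inside the namespace that nvmf_veth_init built a moment earlier: nvmf_tgt_if and nvmf_tgt_if2 sit in nvmf_tgt_ns_spdk with 10.0.0.2 and 10.0.0.3, nvmf_init_if stays in the root namespace with 10.0.0.1, and all three hang off the nvmf_br bridge. waitforlisten then blocks until the RPC socket /var/tmp/spdk.sock answers. The lines that follow drive the target bring-up through the test's rpc_cmd wrapper; reproduced here as a standalone sketch using scripts/rpc.py (the rpc.py invocation style and the backgrounding are assumptions, while the RPC names and arguments are the ones in this trace):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # host side, issued from the root namespace:
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

Once the controller appears, the test identifies the block device by serial (lsblk -l -o NAME,SERIAL piped through grep for SPDKISFASTANDAWESOME) rather than assuming a device name, and only then partitions it with parted before the filesystem cases start.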
00:17:03.707 [2024-04-26 21:19:52.495475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.707 [2024-04-26 21:19:52.495750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.707 [2024-04-26 21:19:52.495666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.707 [2024-04-26 21:19:52.495752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.299 21:19:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:04.299 21:19:53 -- common/autotest_common.sh@850 -- # return 0 00:17:04.299 21:19:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:04.299 21:19:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:04.299 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:17:04.299 21:19:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.299 21:19:53 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:17:04.299 21:19:53 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:04.299 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.299 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:17:04.299 [2024-04-26 21:19:53.316671] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.299 21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.299 21:19:53 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:17:04.299 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.299 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:17:04.299 Malloc1 00:17:04.299 21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.299 21:19:53 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:04.299 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.299 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:17:04.299 21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.299 21:19:53 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:04.299 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.299 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:17:04.299 21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.299 21:19:53 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.299 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.299 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:17:04.299 [2024-04-26 21:19:53.490681] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.299 21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.299 21:19:53 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:17:04.299 21:19:53 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:17:04.299 21:19:53 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:04.299 21:19:53 -- common/autotest_common.sh@1366 -- # local bs 00:17:04.299 21:19:53 -- common/autotest_common.sh@1367 -- # local nb 00:17:04.299 21:19:53 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:17:04.299 21:19:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.299 21:19:53 -- common/autotest_common.sh@10 -- # set +x 00:17:04.299 
21:19:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.299 21:19:53 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:04.299 { 00:17:04.299 "aliases": [ 00:17:04.299 "4158bfec-def8-43f2-8e9c-cddafb2bc3bc" 00:17:04.299 ], 00:17:04.299 "assigned_rate_limits": { 00:17:04.299 "r_mbytes_per_sec": 0, 00:17:04.299 "rw_ios_per_sec": 0, 00:17:04.299 "rw_mbytes_per_sec": 0, 00:17:04.299 "w_mbytes_per_sec": 0 00:17:04.299 }, 00:17:04.299 "block_size": 512, 00:17:04.299 "claim_type": "exclusive_write", 00:17:04.299 "claimed": true, 00:17:04.299 "driver_specific": {}, 00:17:04.299 "memory_domains": [ 00:17:04.299 { 00:17:04.299 "dma_device_id": "system", 00:17:04.299 "dma_device_type": 1 00:17:04.299 }, 00:17:04.299 { 00:17:04.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.299 "dma_device_type": 2 00:17:04.299 } 00:17:04.299 ], 00:17:04.299 "name": "Malloc1", 00:17:04.299 "num_blocks": 1048576, 00:17:04.299 "product_name": "Malloc disk", 00:17:04.299 "supported_io_types": { 00:17:04.299 "abort": true, 00:17:04.299 "compare": false, 00:17:04.299 "compare_and_write": false, 00:17:04.299 "flush": true, 00:17:04.299 "nvme_admin": false, 00:17:04.299 "nvme_io": false, 00:17:04.299 "read": true, 00:17:04.299 "reset": true, 00:17:04.299 "unmap": true, 00:17:04.299 "write": true, 00:17:04.299 "write_zeroes": true 00:17:04.299 }, 00:17:04.299 "uuid": "4158bfec-def8-43f2-8e9c-cddafb2bc3bc", 00:17:04.299 "zoned": false 00:17:04.299 } 00:17:04.299 ]' 00:17:04.299 21:19:53 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:04.558 21:19:53 -- common/autotest_common.sh@1369 -- # bs=512 00:17:04.558 21:19:53 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:04.558 21:19:53 -- common/autotest_common.sh@1370 -- # nb=1048576 00:17:04.558 21:19:53 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:17:04.558 21:19:53 -- common/autotest_common.sh@1374 -- # echo 512 00:17:04.558 21:19:53 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:17:04.558 21:19:53 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:04.558 21:19:53 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:17:04.558 21:19:53 -- common/autotest_common.sh@1184 -- # local i=0 00:17:04.558 21:19:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:04.558 21:19:53 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:04.558 21:19:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:07.090 21:19:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:07.090 21:19:55 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:07.090 21:19:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:07.090 21:19:55 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:07.090 21:19:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:07.090 21:19:55 -- common/autotest_common.sh@1194 -- # return 0 00:17:07.090 21:19:55 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:17:07.090 21:19:55 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:17:07.090 21:19:55 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:17:07.090 21:19:55 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:17:07.090 21:19:55 -- setup/common.sh@76 -- # local 
dev=nvme0n1 00:17:07.090 21:19:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:07.090 21:19:55 -- setup/common.sh@80 -- # echo 536870912 00:17:07.090 21:19:55 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:17:07.090 21:19:55 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:17:07.090 21:19:55 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:17:07.090 21:19:55 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:17:07.090 21:19:55 -- target/filesystem.sh@69 -- # partprobe 00:17:07.090 21:19:55 -- target/filesystem.sh@70 -- # sleep 1 00:17:08.027 21:19:56 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:17:08.027 21:19:56 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:17:08.027 21:19:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:08.027 21:19:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:08.027 21:19:56 -- common/autotest_common.sh@10 -- # set +x 00:17:08.027 ************************************ 00:17:08.027 START TEST filesystem_ext4 00:17:08.027 ************************************ 00:17:08.027 21:19:56 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:17:08.027 21:19:56 -- target/filesystem.sh@18 -- # fstype=ext4 00:17:08.027 21:19:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:08.027 21:19:56 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:17:08.027 21:19:56 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:17:08.027 21:19:56 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:08.027 21:19:56 -- common/autotest_common.sh@914 -- # local i=0 00:17:08.027 21:19:56 -- common/autotest_common.sh@915 -- # local force 00:17:08.027 21:19:56 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:17:08.028 21:19:56 -- common/autotest_common.sh@918 -- # force=-F 00:17:08.028 21:19:56 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:17:08.028 mke2fs 1.46.5 (30-Dec-2021) 00:17:08.028 Discarding device blocks: 0/522240 done 00:17:08.028 Creating filesystem with 522240 1k blocks and 130560 inodes 00:17:08.028 Filesystem UUID: 52e42bcf-a3b1-4b79-9cdb-57f010696ea1 00:17:08.028 Superblock backups stored on blocks: 00:17:08.028 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:17:08.028 00:17:08.028 Allocating group tables: 0/64 done 00:17:08.028 Writing inode tables: 0/64 done 00:17:08.028 Creating journal (8192 blocks): done 00:17:08.028 Writing superblocks and filesystem accounting information: 0/64 done 00:17:08.028 00:17:08.028 21:19:57 -- common/autotest_common.sh@931 -- # return 0 00:17:08.028 21:19:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:08.028 21:19:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:08.028 21:19:57 -- target/filesystem.sh@25 -- # sync 00:17:08.286 21:19:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:08.286 21:19:57 -- target/filesystem.sh@27 -- # sync 00:17:08.286 21:19:57 -- target/filesystem.sh@29 -- # i=0 00:17:08.286 21:19:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:08.286 21:19:57 -- target/filesystem.sh@37 -- # kill -0 78229 00:17:08.286 21:19:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:08.286 21:19:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:08.286 21:19:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:08.286 21:19:57 -- target/filesystem.sh@43 -- # grep -q -w 
nvme0n1p1 00:17:08.286 00:17:08.286 real 0m0.335s 00:17:08.286 user 0m0.021s 00:17:08.286 sys 0m0.070s 00:17:08.286 21:19:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:08.286 21:19:57 -- common/autotest_common.sh@10 -- # set +x 00:17:08.286 ************************************ 00:17:08.286 END TEST filesystem_ext4 00:17:08.286 ************************************ 00:17:08.286 21:19:57 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:17:08.286 21:19:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:08.286 21:19:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:08.286 21:19:57 -- common/autotest_common.sh@10 -- # set +x 00:17:08.286 ************************************ 00:17:08.286 START TEST filesystem_btrfs 00:17:08.286 ************************************ 00:17:08.286 21:19:57 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:17:08.286 21:19:57 -- target/filesystem.sh@18 -- # fstype=btrfs 00:17:08.286 21:19:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:08.286 21:19:57 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:17:08.286 21:19:57 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:17:08.286 21:19:57 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:08.286 21:19:57 -- common/autotest_common.sh@914 -- # local i=0 00:17:08.286 21:19:57 -- common/autotest_common.sh@915 -- # local force 00:17:08.286 21:19:57 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:17:08.286 21:19:57 -- common/autotest_common.sh@920 -- # force=-f 00:17:08.286 21:19:57 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:17:08.544 btrfs-progs v6.6.2 00:17:08.544 See https://btrfs.readthedocs.io for more information. 00:17:08.544 00:17:08.544 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
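Every filesystem case in this suite repeats the sequence that just finished for ext4: carve one GPT partition on the exported namespace, build the filesystem, mount it, create and remove a file with syncs in between, unmount, then confirm the target survived and the namespace is still visible. Condensed from the trace, using the device names and pid this particular run produced (a summary for readers, not the test script itself):

    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    mkfs.ext4 -F /dev/nvme0n1p1               # make_filesystem passes -f instead for btrfs and xfs
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 78229                             # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition must still be visible

The btrfs case now underway and the xfs case after it differ only in the mkfs invocation.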
00:17:08.544 NOTE: several default settings have changed in version 5.15, please make sure 00:17:08.544 this does not affect your deployments: 00:17:08.544 - DUP for metadata (-m dup) 00:17:08.544 - enabled no-holes (-O no-holes) 00:17:08.544 - enabled free-space-tree (-R free-space-tree) 00:17:08.544 00:17:08.544 Label: (null) 00:17:08.544 UUID: 78698e3b-2e5b-4c97-8224-623570a36c43 00:17:08.544 Node size: 16384 00:17:08.544 Sector size: 4096 00:17:08.544 Filesystem size: 510.00MiB 00:17:08.544 Block group profiles: 00:17:08.544 Data: single 8.00MiB 00:17:08.544 Metadata: DUP 32.00MiB 00:17:08.544 System: DUP 8.00MiB 00:17:08.544 SSD detected: yes 00:17:08.544 Zoned device: no 00:17:08.544 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:17:08.544 Runtime features: free-space-tree 00:17:08.544 Checksum: crc32c 00:17:08.544 Number of devices: 1 00:17:08.544 Devices: 00:17:08.544 ID SIZE PATH 00:17:08.544 1 510.00MiB /dev/nvme0n1p1 00:17:08.544 00:17:08.544 21:19:57 -- common/autotest_common.sh@931 -- # return 0 00:17:08.544 21:19:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:08.544 21:19:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:08.544 21:19:57 -- target/filesystem.sh@25 -- # sync 00:17:08.544 21:19:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:08.544 21:19:57 -- target/filesystem.sh@27 -- # sync 00:17:08.544 21:19:57 -- target/filesystem.sh@29 -- # i=0 00:17:08.544 21:19:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:08.544 21:19:57 -- target/filesystem.sh@37 -- # kill -0 78229 00:17:08.544 21:19:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:08.544 21:19:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:08.544 21:19:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:08.544 21:19:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:08.544 00:17:08.544 real 0m0.308s 00:17:08.544 user 0m0.018s 00:17:08.544 sys 0m0.079s 00:17:08.544 21:19:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:08.544 21:19:57 -- common/autotest_common.sh@10 -- # set +x 00:17:08.544 ************************************ 00:17:08.544 END TEST filesystem_btrfs 00:17:08.544 ************************************ 00:17:08.544 21:19:57 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:17:08.544 21:19:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:08.544 21:19:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:08.544 21:19:57 -- common/autotest_common.sh@10 -- # set +x 00:17:08.865 ************************************ 00:17:08.865 START TEST filesystem_xfs 00:17:08.865 ************************************ 00:17:08.865 21:19:57 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:17:08.865 21:19:57 -- target/filesystem.sh@18 -- # fstype=xfs 00:17:08.865 21:19:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:08.865 21:19:57 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:17:08.865 21:19:57 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:17:08.865 21:19:57 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:08.865 21:19:57 -- common/autotest_common.sh@914 -- # local i=0 00:17:08.865 21:19:57 -- common/autotest_common.sh@915 -- # local force 00:17:08.865 21:19:57 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:17:08.865 21:19:57 -- common/autotest_common.sh@920 -- # force=-f 00:17:08.865 21:19:57 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:17:08.865 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:17:08.865 = sectsz=512 attr=2, projid32bit=1 00:17:08.865 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:08.865 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:08.865 data = bsize=4096 blocks=130560, imaxpct=25 00:17:08.865 = sunit=0 swidth=0 blks 00:17:08.865 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:08.865 log =internal log bsize=4096 blocks=16384, version=2 00:17:08.865 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:08.865 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:09.457 Discarding blocks...Done. 00:17:09.457 21:19:58 -- common/autotest_common.sh@931 -- # return 0 00:17:09.457 21:19:58 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:11.989 21:20:00 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:11.989 21:20:00 -- target/filesystem.sh@25 -- # sync 00:17:11.989 21:20:00 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:11.989 21:20:00 -- target/filesystem.sh@27 -- # sync 00:17:11.989 21:20:00 -- target/filesystem.sh@29 -- # i=0 00:17:11.989 21:20:00 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:11.989 21:20:00 -- target/filesystem.sh@37 -- # kill -0 78229 00:17:11.989 21:20:00 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:11.989 21:20:00 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:11.989 21:20:00 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:11.989 21:20:00 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:11.989 ************************************ 00:17:11.989 END TEST filesystem_xfs 00:17:11.989 ************************************ 00:17:11.989 00:17:11.989 real 0m3.072s 00:17:11.989 user 0m0.020s 00:17:11.989 sys 0m0.057s 00:17:11.989 21:20:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:11.989 21:20:00 -- common/autotest_common.sh@10 -- # set +x 00:17:11.989 21:20:00 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:11.989 21:20:00 -- target/filesystem.sh@93 -- # sync 00:17:11.989 21:20:00 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:11.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.989 21:20:01 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:11.989 21:20:01 -- common/autotest_common.sh@1205 -- # local i=0 00:17:11.989 21:20:01 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:11.989 21:20:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.989 21:20:01 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:11.989 21:20:01 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.989 21:20:01 -- common/autotest_common.sh@1217 -- # return 0 00:17:11.989 21:20:01 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:11.989 21:20:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.989 21:20:01 -- common/autotest_common.sh@10 -- # set +x 00:17:11.989 21:20:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.989 21:20:01 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:11.989 21:20:01 -- target/filesystem.sh@101 -- # killprocess 78229 00:17:11.989 21:20:01 -- common/autotest_common.sh@936 -- # '[' -z 78229 ']' 00:17:11.989 21:20:01 -- common/autotest_common.sh@940 -- # kill -0 78229 00:17:11.989 21:20:01 -- 
common/autotest_common.sh@941 -- # uname 00:17:11.989 21:20:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:11.989 21:20:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78229 00:17:11.989 21:20:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:11.989 21:20:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:11.989 killing process with pid 78229 00:17:11.989 21:20:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78229' 00:17:11.989 21:20:01 -- common/autotest_common.sh@955 -- # kill 78229 00:17:11.989 21:20:01 -- common/autotest_common.sh@960 -- # wait 78229 00:17:12.248 21:20:01 -- target/filesystem.sh@102 -- # nvmfpid= 00:17:12.248 00:17:12.248 real 0m9.202s 00:17:12.248 user 0m35.535s 00:17:12.248 sys 0m1.319s 00:17:12.248 21:20:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:12.248 21:20:01 -- common/autotest_common.sh@10 -- # set +x 00:17:12.248 ************************************ 00:17:12.248 END TEST nvmf_filesystem_no_in_capsule 00:17:12.248 ************************************ 00:17:12.248 21:20:01 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:17:12.248 21:20:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:12.248 21:20:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:12.248 21:20:01 -- common/autotest_common.sh@10 -- # set +x 00:17:12.507 ************************************ 00:17:12.507 START TEST nvmf_filesystem_in_capsule 00:17:12.507 ************************************ 00:17:12.507 21:20:01 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:17:12.507 21:20:01 -- target/filesystem.sh@47 -- # in_capsule=4096 00:17:12.507 21:20:01 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:17:12.507 21:20:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:12.508 21:20:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:12.508 21:20:01 -- common/autotest_common.sh@10 -- # set +x 00:17:12.508 21:20:01 -- nvmf/common.sh@470 -- # nvmfpid=78558 00:17:12.508 21:20:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:12.508 21:20:01 -- nvmf/common.sh@471 -- # waitforlisten 78558 00:17:12.508 21:20:01 -- common/autotest_common.sh@817 -- # '[' -z 78558 ']' 00:17:12.508 21:20:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.508 21:20:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:12.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.508 21:20:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.508 21:20:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:12.508 21:20:01 -- common/autotest_common.sh@10 -- # set +x 00:17:12.508 [2024-04-26 21:20:01.648785] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
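The teardown above closes the first pass: the host disconnects from nqn.2016-06.io.spdk:cnode1, the subsystem is deleted over RPC, and killprocess stops pid 78229. The suite is then rerun as nvmf_filesystem_in_capsule with in_capsule=4096, which is the second target instance (pid 78558) starting up here. The only functional difference from the first pass is the transport setup: -c 4096 sets the in-capsule data size, so commands carrying up to 4096 bytes of data can ship that data inside the command capsule instead of having the target fetch it separately; the malloc bdev, subsystem, listener, connect, and the ext4/btrfs/xfs checks are otherwise identical. The differing call, in the same rpc.py form as the earlier sketch (the -c semantics stated here are the usual reading of SPDK's in-capsule-data-size option, not something this log spells out):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096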
00:17:12.508 [2024-04-26 21:20:01.648857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.766 [2024-04-26 21:20:01.791696] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.766 [2024-04-26 21:20:01.844200] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.766 [2024-04-26 21:20:01.844253] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.766 [2024-04-26 21:20:01.844260] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.766 [2024-04-26 21:20:01.844266] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.766 [2024-04-26 21:20:01.844271] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.766 [2024-04-26 21:20:01.844396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.766 [2024-04-26 21:20:01.844480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.766 [2024-04-26 21:20:01.844602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.766 [2024-04-26 21:20:01.844604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:13.332 21:20:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:13.332 21:20:02 -- common/autotest_common.sh@850 -- # return 0 00:17:13.332 21:20:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:13.332 21:20:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:13.332 21:20:02 -- common/autotest_common.sh@10 -- # set +x 00:17:13.592 21:20:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.592 21:20:02 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:17:13.592 21:20:02 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:17:13.592 21:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.592 21:20:02 -- common/autotest_common.sh@10 -- # set +x 00:17:13.592 [2024-04-26 21:20:02.619384] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.592 21:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.592 21:20:02 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:17:13.592 21:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.592 21:20:02 -- common/autotest_common.sh@10 -- # set +x 00:17:13.592 Malloc1 00:17:13.592 21:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.592 21:20:02 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:13.592 21:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.592 21:20:02 -- common/autotest_common.sh@10 -- # set +x 00:17:13.593 21:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.593 21:20:02 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:13.593 21:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.593 21:20:02 -- common/autotest_common.sh@10 -- # set +x 00:17:13.593 21:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.593 21:20:02 -- target/filesystem.sh@56 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.593 21:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.593 21:20:02 -- common/autotest_common.sh@10 -- # set +x 00:17:13.593 [2024-04-26 21:20:02.787060] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.593 21:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.593 21:20:02 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:17:13.593 21:20:02 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:17:13.593 21:20:02 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:13.593 21:20:02 -- common/autotest_common.sh@1366 -- # local bs 00:17:13.593 21:20:02 -- common/autotest_common.sh@1367 -- # local nb 00:17:13.593 21:20:02 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:17:13.593 21:20:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.593 21:20:02 -- common/autotest_common.sh@10 -- # set +x 00:17:13.593 21:20:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.593 21:20:02 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:13.593 { 00:17:13.593 "aliases": [ 00:17:13.593 "bae71153-91c1-4639-be81-05187c906ac8" 00:17:13.593 ], 00:17:13.593 "assigned_rate_limits": { 00:17:13.593 "r_mbytes_per_sec": 0, 00:17:13.593 "rw_ios_per_sec": 0, 00:17:13.593 "rw_mbytes_per_sec": 0, 00:17:13.593 "w_mbytes_per_sec": 0 00:17:13.593 }, 00:17:13.593 "block_size": 512, 00:17:13.593 "claim_type": "exclusive_write", 00:17:13.593 "claimed": true, 00:17:13.593 "driver_specific": {}, 00:17:13.593 "memory_domains": [ 00:17:13.593 { 00:17:13.593 "dma_device_id": "system", 00:17:13.593 "dma_device_type": 1 00:17:13.593 }, 00:17:13.593 { 00:17:13.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.593 "dma_device_type": 2 00:17:13.593 } 00:17:13.593 ], 00:17:13.593 "name": "Malloc1", 00:17:13.593 "num_blocks": 1048576, 00:17:13.593 "product_name": "Malloc disk", 00:17:13.593 "supported_io_types": { 00:17:13.593 "abort": true, 00:17:13.593 "compare": false, 00:17:13.593 "compare_and_write": false, 00:17:13.593 "flush": true, 00:17:13.593 "nvme_admin": false, 00:17:13.593 "nvme_io": false, 00:17:13.593 "read": true, 00:17:13.593 "reset": true, 00:17:13.593 "unmap": true, 00:17:13.593 "write": true, 00:17:13.593 "write_zeroes": true 00:17:13.593 }, 00:17:13.593 "uuid": "bae71153-91c1-4639-be81-05187c906ac8", 00:17:13.593 "zoned": false 00:17:13.593 } 00:17:13.593 ]' 00:17:13.593 21:20:02 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:13.851 21:20:02 -- common/autotest_common.sh@1369 -- # bs=512 00:17:13.851 21:20:02 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:13.851 21:20:02 -- common/autotest_common.sh@1370 -- # nb=1048576 00:17:13.851 21:20:02 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:17:13.851 21:20:02 -- common/autotest_common.sh@1374 -- # echo 512 00:17:13.851 21:20:02 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:17:13.851 21:20:02 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:13.851 21:20:03 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:17:13.851 21:20:03 -- common/autotest_common.sh@1184 -- # local i=0 00:17:13.851 21:20:03 -- common/autotest_common.sh@1185 -- # local 
nvme_device_counter=1 nvme_devices=0 00:17:13.851 21:20:03 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:13.851 21:20:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:16.382 21:20:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:16.382 21:20:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:16.382 21:20:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.382 21:20:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:16.382 21:20:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:16.382 21:20:05 -- common/autotest_common.sh@1194 -- # return 0 00:17:16.382 21:20:05 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:17:16.382 21:20:05 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:17:16.382 21:20:05 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:17:16.382 21:20:05 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:17:16.382 21:20:05 -- setup/common.sh@76 -- # local dev=nvme0n1 00:17:16.382 21:20:05 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:16.382 21:20:05 -- setup/common.sh@80 -- # echo 536870912 00:17:16.382 21:20:05 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:17:16.382 21:20:05 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:17:16.382 21:20:05 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:17:16.382 21:20:05 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:17:16.382 21:20:05 -- target/filesystem.sh@69 -- # partprobe 00:17:16.382 21:20:05 -- target/filesystem.sh@70 -- # sleep 1 00:17:17.313 21:20:06 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:17:17.313 21:20:06 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:17:17.313 21:20:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:17.313 21:20:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:17.313 21:20:06 -- common/autotest_common.sh@10 -- # set +x 00:17:17.313 ************************************ 00:17:17.313 START TEST filesystem_in_capsule_ext4 00:17:17.313 ************************************ 00:17:17.313 21:20:06 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:17:17.313 21:20:06 -- target/filesystem.sh@18 -- # fstype=ext4 00:17:17.313 21:20:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:17.313 21:20:06 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:17:17.313 21:20:06 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:17:17.313 21:20:06 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:17.313 21:20:06 -- common/autotest_common.sh@914 -- # local i=0 00:17:17.313 21:20:06 -- common/autotest_common.sh@915 -- # local force 00:17:17.313 21:20:06 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:17:17.313 21:20:06 -- common/autotest_common.sh@918 -- # force=-F 00:17:17.313 21:20:06 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:17:17.313 mke2fs 1.46.5 (30-Dec-2021) 00:17:17.313 Discarding device blocks: 0/522240 done 00:17:17.313 Creating filesystem with 522240 1k blocks and 130560 inodes 00:17:17.313 Filesystem UUID: 594c7249-c35d-473f-ad91-b1d86ac3bac1 00:17:17.313 Superblock backups stored on blocks: 00:17:17.313 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:17:17.313 00:17:17.313 Allocating group tables: 0/64 done 
00:17:17.313 Writing inode tables: 0/64 done 00:17:17.313 Creating journal (8192 blocks): done 00:17:17.313 Writing superblocks and filesystem accounting information: 0/64 done 00:17:17.313 00:17:17.313 21:20:06 -- common/autotest_common.sh@931 -- # return 0 00:17:17.313 21:20:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:17.313 21:20:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:17.571 21:20:06 -- target/filesystem.sh@25 -- # sync 00:17:17.571 21:20:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:17.571 21:20:06 -- target/filesystem.sh@27 -- # sync 00:17:17.571 21:20:06 -- target/filesystem.sh@29 -- # i=0 00:17:17.571 21:20:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:17.571 21:20:06 -- target/filesystem.sh@37 -- # kill -0 78558 00:17:17.571 21:20:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:17.571 21:20:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:17.571 21:20:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:17.571 21:20:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:17.571 00:17:17.571 real 0m0.377s 00:17:17.571 user 0m0.020s 00:17:17.571 sys 0m0.058s 00:17:17.571 21:20:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:17.571 21:20:06 -- common/autotest_common.sh@10 -- # set +x 00:17:17.571 ************************************ 00:17:17.571 END TEST filesystem_in_capsule_ext4 00:17:17.571 ************************************ 00:17:17.571 21:20:06 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:17:17.571 21:20:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:17.571 21:20:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:17.571 21:20:06 -- common/autotest_common.sh@10 -- # set +x 00:17:17.880 ************************************ 00:17:17.880 START TEST filesystem_in_capsule_btrfs 00:17:17.880 ************************************ 00:17:17.880 21:20:06 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:17:17.880 21:20:06 -- target/filesystem.sh@18 -- # fstype=btrfs 00:17:17.880 21:20:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:17.880 21:20:06 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:17:17.880 21:20:06 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:17:17.880 21:20:06 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:17.880 21:20:06 -- common/autotest_common.sh@914 -- # local i=0 00:17:17.880 21:20:06 -- common/autotest_common.sh@915 -- # local force 00:17:17.880 21:20:06 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:17:17.880 21:20:06 -- common/autotest_common.sh@920 -- # force=-f 00:17:17.880 21:20:06 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:17:17.880 btrfs-progs v6.6.2 00:17:17.880 See https://btrfs.readthedocs.io for more information. 00:17:17.880 00:17:17.880 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:17:17.880 NOTE: several default settings have changed in version 5.15, please make sure 00:17:17.880 this does not affect your deployments: 00:17:17.880 - DUP for metadata (-m dup) 00:17:17.880 - enabled no-holes (-O no-holes) 00:17:17.880 - enabled free-space-tree (-R free-space-tree) 00:17:17.880 00:17:17.880 Label: (null) 00:17:17.880 UUID: 659e2769-af61-4826-9cdf-032308f54d32 00:17:17.880 Node size: 16384 00:17:17.880 Sector size: 4096 00:17:17.880 Filesystem size: 510.00MiB 00:17:17.880 Block group profiles: 00:17:17.880 Data: single 8.00MiB 00:17:17.880 Metadata: DUP 32.00MiB 00:17:17.880 System: DUP 8.00MiB 00:17:17.880 SSD detected: yes 00:17:17.880 Zoned device: no 00:17:17.880 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:17:17.880 Runtime features: free-space-tree 00:17:17.880 Checksum: crc32c 00:17:17.880 Number of devices: 1 00:17:17.880 Devices: 00:17:17.880 ID SIZE PATH 00:17:17.880 1 510.00MiB /dev/nvme0n1p1 00:17:17.880 00:17:17.880 21:20:06 -- common/autotest_common.sh@931 -- # return 0 00:17:17.880 21:20:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:17.880 21:20:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:17.880 21:20:07 -- target/filesystem.sh@25 -- # sync 00:17:17.880 21:20:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:17.880 21:20:07 -- target/filesystem.sh@27 -- # sync 00:17:17.880 21:20:07 -- target/filesystem.sh@29 -- # i=0 00:17:17.880 21:20:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:17.880 21:20:07 -- target/filesystem.sh@37 -- # kill -0 78558 00:17:17.880 21:20:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:17.880 21:20:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:17.880 21:20:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:17.880 21:20:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:17.880 00:17:17.880 real 0m0.210s 00:17:17.880 user 0m0.030s 00:17:17.880 sys 0m0.073s 00:17:17.880 21:20:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:17.880 21:20:07 -- common/autotest_common.sh@10 -- # set +x 00:17:17.880 ************************************ 00:17:17.880 END TEST filesystem_in_capsule_btrfs 00:17:17.880 ************************************ 00:17:17.880 21:20:07 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:17:17.880 21:20:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:17.880 21:20:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:17.880 21:20:07 -- common/autotest_common.sh@10 -- # set +x 00:17:18.184 ************************************ 00:17:18.184 START TEST filesystem_in_capsule_xfs 00:17:18.184 ************************************ 00:17:18.184 21:20:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:17:18.184 21:20:07 -- target/filesystem.sh@18 -- # fstype=xfs 00:17:18.184 21:20:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:18.184 21:20:07 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:17:18.184 21:20:07 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:17:18.184 21:20:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:18.184 21:20:07 -- common/autotest_common.sh@914 -- # local i=0 00:17:18.184 21:20:07 -- common/autotest_common.sh@915 -- # local force 00:17:18.184 21:20:07 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:17:18.184 21:20:07 -- common/autotest_common.sh@920 -- # force=-f 
00:17:18.184 21:20:07 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:17:18.184 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:17:18.184 = sectsz=512 attr=2, projid32bit=1 00:17:18.184 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:18.184 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:18.184 data = bsize=4096 blocks=130560, imaxpct=25 00:17:18.184 = sunit=0 swidth=0 blks 00:17:18.184 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:18.184 log =internal log bsize=4096 blocks=16384, version=2 00:17:18.184 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:18.184 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:18.750 Discarding blocks...Done. 00:17:18.750 21:20:07 -- common/autotest_common.sh@931 -- # return 0 00:17:18.750 21:20:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:20.651 21:20:09 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:20.651 21:20:09 -- target/filesystem.sh@25 -- # sync 00:17:20.651 21:20:09 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:20.651 21:20:09 -- target/filesystem.sh@27 -- # sync 00:17:20.651 21:20:09 -- target/filesystem.sh@29 -- # i=0 00:17:20.651 21:20:09 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:20.651 21:20:09 -- target/filesystem.sh@37 -- # kill -0 78558 00:17:20.651 21:20:09 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:20.651 21:20:09 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:20.651 21:20:09 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:20.651 21:20:09 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:20.651 00:17:20.651 real 0m2.600s 00:17:20.651 user 0m0.031s 00:17:20.651 sys 0m0.071s 00:17:20.651 21:20:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:20.651 21:20:09 -- common/autotest_common.sh@10 -- # set +x 00:17:20.651 ************************************ 00:17:20.651 END TEST filesystem_in_capsule_xfs 00:17:20.651 ************************************ 00:17:20.651 21:20:09 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:20.651 21:20:09 -- target/filesystem.sh@93 -- # sync 00:17:20.651 21:20:09 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.910 21:20:09 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:20.910 21:20:09 -- common/autotest_common.sh@1205 -- # local i=0 00:17:20.910 21:20:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:20.910 21:20:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.910 21:20:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:20.910 21:20:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.910 21:20:09 -- common/autotest_common.sh@1217 -- # return 0 00:17:20.910 21:20:09 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.910 21:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.910 21:20:09 -- common/autotest_common.sh@10 -- # set +x 00:17:20.910 21:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.910 21:20:09 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:20.910 21:20:09 -- target/filesystem.sh@101 -- # killprocess 78558 00:17:20.910 21:20:09 -- common/autotest_common.sh@936 -- # '[' -z 78558 ']' 00:17:20.910 21:20:09 -- common/autotest_common.sh@940 -- # kill -0 78558 
00:17:20.910 21:20:09 -- common/autotest_common.sh@941 -- # uname 00:17:20.910 21:20:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.910 21:20:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78558 00:17:20.910 21:20:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:20.910 21:20:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:20.910 21:20:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78558' 00:17:20.910 killing process with pid 78558 00:17:20.910 21:20:09 -- common/autotest_common.sh@955 -- # kill 78558 00:17:20.910 21:20:09 -- common/autotest_common.sh@960 -- # wait 78558 00:17:21.169 21:20:10 -- target/filesystem.sh@102 -- # nvmfpid= 00:17:21.169 00:17:21.169 real 0m8.759s 00:17:21.169 user 0m33.789s 00:17:21.169 sys 0m1.354s 00:17:21.169 21:20:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:21.169 21:20:10 -- common/autotest_common.sh@10 -- # set +x 00:17:21.169 ************************************ 00:17:21.169 END TEST nvmf_filesystem_in_capsule 00:17:21.169 ************************************ 00:17:21.169 21:20:10 -- target/filesystem.sh@108 -- # nvmftestfini 00:17:21.169 21:20:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:21.169 21:20:10 -- nvmf/common.sh@117 -- # sync 00:17:21.428 21:20:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.428 21:20:10 -- nvmf/common.sh@120 -- # set +e 00:17:21.428 21:20:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.428 21:20:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.428 rmmod nvme_tcp 00:17:21.428 rmmod nvme_fabrics 00:17:21.428 rmmod nvme_keyring 00:17:21.428 21:20:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:21.428 21:20:10 -- nvmf/common.sh@124 -- # set -e 00:17:21.428 21:20:10 -- nvmf/common.sh@125 -- # return 0 00:17:21.428 21:20:10 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:17:21.428 21:20:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:21.428 21:20:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:21.428 21:20:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:21.428 21:20:10 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:21.428 21:20:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:21.428 21:20:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.428 21:20:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.428 21:20:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.428 21:20:10 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:21.428 00:17:21.428 real 0m19.096s 00:17:21.428 user 1m9.694s 00:17:21.428 sys 0m3.236s 00:17:21.428 21:20:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:21.428 21:20:10 -- common/autotest_common.sh@10 -- # set +x 00:17:21.428 ************************************ 00:17:21.428 END TEST nvmf_filesystem 00:17:21.428 ************************************ 00:17:21.428 21:20:10 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:17:21.428 21:20:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:21.428 21:20:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:21.428 21:20:10 -- common/autotest_common.sh@10 -- # set +x 00:17:21.687 ************************************ 00:17:21.687 START TEST nvmf_discovery 00:17:21.687 ************************************ 00:17:21.687 21:20:10 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:17:21.687 * Looking for test storage... 00:17:21.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:21.687 21:20:10 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:21.687 21:20:10 -- nvmf/common.sh@7 -- # uname -s 00:17:21.687 21:20:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.687 21:20:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.687 21:20:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.687 21:20:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.687 21:20:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.687 21:20:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.687 21:20:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.687 21:20:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.687 21:20:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.687 21:20:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.687 21:20:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:17:21.687 21:20:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:17:21.687 21:20:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.687 21:20:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.687 21:20:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:21.687 21:20:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.687 21:20:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:21.687 21:20:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.687 21:20:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.687 21:20:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.687 21:20:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.687 21:20:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.688 21:20:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.688 21:20:10 -- paths/export.sh@5 -- # export PATH 00:17:21.688 21:20:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.688 21:20:10 -- nvmf/common.sh@47 -- # : 0 00:17:21.688 21:20:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:21.688 21:20:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:21.688 21:20:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.688 21:20:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.688 21:20:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.688 21:20:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:21.688 21:20:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:21.688 21:20:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:21.688 21:20:10 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:17:21.688 21:20:10 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:17:21.688 21:20:10 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:17:21.688 21:20:10 -- target/discovery.sh@15 -- # hash nvme 00:17:21.688 21:20:10 -- target/discovery.sh@20 -- # nvmftestinit 00:17:21.688 21:20:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:21.688 21:20:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.688 21:20:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:21.688 21:20:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:21.688 21:20:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:21.688 21:20:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.688 21:20:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.688 21:20:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.688 21:20:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:21.688 21:20:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:21.688 21:20:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:21.688 21:20:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:21.688 21:20:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:21.688 21:20:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:21.688 21:20:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.688 21:20:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.688 21:20:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:21.688 21:20:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:21.688 21:20:10 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:21.688 21:20:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:21.688 21:20:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:21.688 21:20:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.688 21:20:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:21.688 21:20:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:21.688 21:20:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:21.688 21:20:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:21.688 21:20:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:21.688 21:20:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:21.688 Cannot find device "nvmf_tgt_br" 00:17:21.688 21:20:10 -- nvmf/common.sh@155 -- # true 00:17:21.688 21:20:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:21.688 Cannot find device "nvmf_tgt_br2" 00:17:21.688 21:20:10 -- nvmf/common.sh@156 -- # true 00:17:21.688 21:20:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:21.688 21:20:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:21.947 Cannot find device "nvmf_tgt_br" 00:17:21.947 21:20:10 -- nvmf/common.sh@158 -- # true 00:17:21.947 21:20:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:21.947 Cannot find device "nvmf_tgt_br2" 00:17:21.947 21:20:10 -- nvmf/common.sh@159 -- # true 00:17:21.947 21:20:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:21.947 21:20:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:21.947 21:20:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:21.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.947 21:20:11 -- nvmf/common.sh@162 -- # true 00:17:21.947 21:20:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:21.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:21.947 21:20:11 -- nvmf/common.sh@163 -- # true 00:17:21.947 21:20:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:21.947 21:20:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:21.947 21:20:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:21.947 21:20:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:21.947 21:20:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:21.947 21:20:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:21.947 21:20:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:21.947 21:20:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:21.947 21:20:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:21.947 21:20:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:21.947 21:20:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:21.947 21:20:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:21.947 21:20:11 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:21.947 21:20:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:21.947 21:20:11 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:21.947 21:20:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:21.947 21:20:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:21.947 21:20:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:21.947 21:20:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:22.206 21:20:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:22.206 21:20:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:22.206 21:20:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:22.206 21:20:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:22.206 21:20:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:22.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:17:22.207 00:17:22.207 --- 10.0.0.2 ping statistics --- 00:17:22.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.207 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:22.207 21:20:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:22.207 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:22.207 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:17:22.207 00:17:22.207 --- 10.0.0.3 ping statistics --- 00:17:22.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.207 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:22.207 21:20:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:22.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:22.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:17:22.207 00:17:22.207 --- 10.0.0.1 ping statistics --- 00:17:22.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.207 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:22.207 21:20:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.207 21:20:11 -- nvmf/common.sh@422 -- # return 0 00:17:22.207 21:20:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:22.207 21:20:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.207 21:20:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:22.207 21:20:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:22.207 21:20:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.207 21:20:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:22.207 21:20:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:22.207 21:20:11 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:17:22.207 21:20:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:22.207 21:20:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:22.207 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:17:22.207 21:20:11 -- nvmf/common.sh@470 -- # nvmfpid=79033 00:17:22.207 21:20:11 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:22.207 21:20:11 -- nvmf/common.sh@471 -- # waitforlisten 79033 00:17:22.207 21:20:11 -- common/autotest_common.sh@817 -- # '[' -z 79033 ']' 00:17:22.207 21:20:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.207 21:20:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:22.207 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.207 21:20:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.207 21:20:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:22.207 21:20:11 -- common/autotest_common.sh@10 -- # set +x 00:17:22.207 [2024-04-26 21:20:11.326709] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:22.207 [2024-04-26 21:20:11.326779] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.207 [2024-04-26 21:20:11.454932] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:22.466 [2024-04-26 21:20:11.506278] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.466 [2024-04-26 21:20:11.506327] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.466 [2024-04-26 21:20:11.506344] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.466 [2024-04-26 21:20:11.506350] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.466 [2024-04-26 21:20:11.506365] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.466 [2024-04-26 21:20:11.506505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.466 [2024-04-26 21:20:11.506583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.466 [2024-04-26 21:20:11.506783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.466 [2024-04-26 21:20:11.506786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:23.034 21:20:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:23.034 21:20:12 -- common/autotest_common.sh@850 -- # return 0 00:17:23.034 21:20:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:23.034 21:20:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:23.034 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.034 21:20:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.034 21:20:12 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:23.034 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.034 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 [2024-04-26 21:20:12.297243] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@26 -- # seq 1 4 00:17:23.293 21:20:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:23.293 21:20:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 Null1 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 [2024-04-26 21:20:12.372033] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:23.293 21:20:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 Null2 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:23.293 21:20:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 Null3 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:17:23.293 21:20:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 Null4 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.293 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.293 21:20:12 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:17:23.293 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.293 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.553 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.554 21:20:12 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -a 10.0.0.2 -s 4420 00:17:23.554 00:17:23.554 Discovery Log Number of Records 6, Generation counter 6 00:17:23.554 =====Discovery Log Entry 0====== 00:17:23.554 trtype: tcp 00:17:23.554 adrfam: ipv4 00:17:23.554 subtype: current discovery subsystem 00:17:23.554 treq: not required 00:17:23.554 portid: 0 00:17:23.554 trsvcid: 4420 00:17:23.554 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:23.554 traddr: 10.0.0.2 00:17:23.554 eflags: explicit discovery connections, duplicate discovery information 00:17:23.554 sectype: none 00:17:23.554 =====Discovery Log Entry 1====== 00:17:23.554 trtype: tcp 00:17:23.554 adrfam: ipv4 00:17:23.554 subtype: nvme subsystem 00:17:23.554 treq: not required 00:17:23.554 portid: 0 00:17:23.554 trsvcid: 4420 00:17:23.554 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:23.554 traddr: 10.0.0.2 00:17:23.554 eflags: none 00:17:23.554 sectype: none 00:17:23.554 =====Discovery Log Entry 2====== 00:17:23.554 trtype: tcp 00:17:23.554 adrfam: ipv4 
00:17:23.554 subtype: nvme subsystem 00:17:23.554 treq: not required 00:17:23.554 portid: 0 00:17:23.554 trsvcid: 4420 00:17:23.554 subnqn: nqn.2016-06.io.spdk:cnode2 00:17:23.554 traddr: 10.0.0.2 00:17:23.554 eflags: none 00:17:23.554 sectype: none 00:17:23.554 =====Discovery Log Entry 3====== 00:17:23.554 trtype: tcp 00:17:23.554 adrfam: ipv4 00:17:23.554 subtype: nvme subsystem 00:17:23.554 treq: not required 00:17:23.554 portid: 0 00:17:23.554 trsvcid: 4420 00:17:23.554 subnqn: nqn.2016-06.io.spdk:cnode3 00:17:23.554 traddr: 10.0.0.2 00:17:23.554 eflags: none 00:17:23.554 sectype: none 00:17:23.554 =====Discovery Log Entry 4====== 00:17:23.554 trtype: tcp 00:17:23.554 adrfam: ipv4 00:17:23.554 subtype: nvme subsystem 00:17:23.554 treq: not required 00:17:23.554 portid: 0 00:17:23.554 trsvcid: 4420 00:17:23.554 subnqn: nqn.2016-06.io.spdk:cnode4 00:17:23.554 traddr: 10.0.0.2 00:17:23.554 eflags: none 00:17:23.554 sectype: none 00:17:23.554 =====Discovery Log Entry 5====== 00:17:23.554 trtype: tcp 00:17:23.554 adrfam: ipv4 00:17:23.554 subtype: discovery subsystem referral 00:17:23.554 treq: not required 00:17:23.554 portid: 0 00:17:23.554 trsvcid: 4430 00:17:23.554 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:23.554 traddr: 10.0.0.2 00:17:23.554 eflags: none 00:17:23.554 sectype: none 00:17:23.554 21:20:12 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:17:23.554 Perform nvmf subsystem discovery via RPC 00:17:23.554 21:20:12 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:17:23.554 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.554 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 [2024-04-26 21:20:12.631600] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:17:23.554 [ 00:17:23.554 { 00:17:23.554 "allow_any_host": true, 00:17:23.554 "hosts": [], 00:17:23.554 "listen_addresses": [ 00:17:23.554 { 00:17:23.554 "adrfam": "IPv4", 00:17:23.554 "traddr": "10.0.0.2", 00:17:23.554 "transport": "TCP", 00:17:23.554 "trsvcid": "4420", 00:17:23.554 "trtype": "TCP" 00:17:23.554 } 00:17:23.554 ], 00:17:23.554 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:23.554 "subtype": "Discovery" 00:17:23.554 }, 00:17:23.554 { 00:17:23.554 "allow_any_host": true, 00:17:23.554 "hosts": [], 00:17:23.554 "listen_addresses": [ 00:17:23.554 { 00:17:23.554 "adrfam": "IPv4", 00:17:23.554 "traddr": "10.0.0.2", 00:17:23.554 "transport": "TCP", 00:17:23.554 "trsvcid": "4420", 00:17:23.554 "trtype": "TCP" 00:17:23.554 } 00:17:23.554 ], 00:17:23.554 "max_cntlid": 65519, 00:17:23.554 "max_namespaces": 32, 00:17:23.554 "min_cntlid": 1, 00:17:23.554 "model_number": "SPDK bdev Controller", 00:17:23.554 "namespaces": [ 00:17:23.554 { 00:17:23.554 "bdev_name": "Null1", 00:17:23.554 "name": "Null1", 00:17:23.554 "nguid": "08344BB66B1049728F48FDF574B53C2C", 00:17:23.554 "nsid": 1, 00:17:23.554 "uuid": "08344bb6-6b10-4972-8f48-fdf574b53c2c" 00:17:23.554 } 00:17:23.554 ], 00:17:23.554 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:23.554 "serial_number": "SPDK00000000000001", 00:17:23.554 "subtype": "NVMe" 00:17:23.554 }, 00:17:23.554 { 00:17:23.554 "allow_any_host": true, 00:17:23.554 "hosts": [], 00:17:23.554 "listen_addresses": [ 00:17:23.554 { 00:17:23.554 "adrfam": "IPv4", 00:17:23.554 "traddr": "10.0.0.2", 00:17:23.554 "transport": "TCP", 00:17:23.554 "trsvcid": "4420", 00:17:23.554 "trtype": "TCP" 00:17:23.554 
} 00:17:23.554 ], 00:17:23.554 "max_cntlid": 65519, 00:17:23.554 "max_namespaces": 32, 00:17:23.554 "min_cntlid": 1, 00:17:23.554 "model_number": "SPDK bdev Controller", 00:17:23.554 "namespaces": [ 00:17:23.554 { 00:17:23.554 "bdev_name": "Null2", 00:17:23.554 "name": "Null2", 00:17:23.554 "nguid": "B560AE18DF2A409686D306B5E3A60CC5", 00:17:23.554 "nsid": 1, 00:17:23.554 "uuid": "b560ae18-df2a-4096-86d3-06b5e3a60cc5" 00:17:23.554 } 00:17:23.554 ], 00:17:23.554 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:23.554 "serial_number": "SPDK00000000000002", 00:17:23.554 "subtype": "NVMe" 00:17:23.554 }, 00:17:23.554 { 00:17:23.554 "allow_any_host": true, 00:17:23.554 "hosts": [], 00:17:23.554 "listen_addresses": [ 00:17:23.554 { 00:17:23.554 "adrfam": "IPv4", 00:17:23.554 "traddr": "10.0.0.2", 00:17:23.554 "transport": "TCP", 00:17:23.554 "trsvcid": "4420", 00:17:23.554 "trtype": "TCP" 00:17:23.554 } 00:17:23.554 ], 00:17:23.554 "max_cntlid": 65519, 00:17:23.554 "max_namespaces": 32, 00:17:23.554 "min_cntlid": 1, 00:17:23.554 "model_number": "SPDK bdev Controller", 00:17:23.554 "namespaces": [ 00:17:23.554 { 00:17:23.554 "bdev_name": "Null3", 00:17:23.554 "name": "Null3", 00:17:23.554 "nguid": "5DC8802447CF48D2A8EEED80289A1828", 00:17:23.554 "nsid": 1, 00:17:23.554 "uuid": "5dc88024-47cf-48d2-a8ee-ed80289a1828" 00:17:23.554 } 00:17:23.554 ], 00:17:23.554 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:17:23.554 "serial_number": "SPDK00000000000003", 00:17:23.554 "subtype": "NVMe" 00:17:23.554 }, 00:17:23.554 { 00:17:23.554 "allow_any_host": true, 00:17:23.554 "hosts": [], 00:17:23.554 "listen_addresses": [ 00:17:23.554 { 00:17:23.554 "adrfam": "IPv4", 00:17:23.554 "traddr": "10.0.0.2", 00:17:23.554 "transport": "TCP", 00:17:23.554 "trsvcid": "4420", 00:17:23.554 "trtype": "TCP" 00:17:23.554 } 00:17:23.554 ], 00:17:23.554 "max_cntlid": 65519, 00:17:23.554 "max_namespaces": 32, 00:17:23.554 "min_cntlid": 1, 00:17:23.554 "model_number": "SPDK bdev Controller", 00:17:23.554 "namespaces": [ 00:17:23.554 { 00:17:23.554 "bdev_name": "Null4", 00:17:23.554 "name": "Null4", 00:17:23.554 "nguid": "AB5075F58B36400CB5B6BDF022407672", 00:17:23.554 "nsid": 1, 00:17:23.554 "uuid": "ab5075f5-8b36-400c-b5b6-bdf022407672" 00:17:23.554 } 00:17:23.554 ], 00:17:23.554 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:17:23.554 "serial_number": "SPDK00000000000004", 00:17:23.554 "subtype": "NVMe" 00:17:23.554 } 00:17:23.554 ] 00:17:23.554 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.554 21:20:12 -- target/discovery.sh@42 -- # seq 1 4 00:17:23.554 21:20:12 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:23.554 21:20:12 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.554 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.554 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.554 21:20:12 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:17:23.554 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.554 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.554 21:20:12 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:23.554 21:20:12 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:23.554 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.554 21:20:12 -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.554 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.554 21:20:12 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:17:23.554 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.554 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.554 21:20:12 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:23.554 21:20:12 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:23.554 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.554 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.554 21:20:12 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:17:23.554 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.554 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.554 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.554 21:20:12 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:17:23.554 21:20:12 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:23.554 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.554 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.555 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.555 21:20:12 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:17:23.555 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.555 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.555 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.555 21:20:12 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:17:23.555 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.555 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.555 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.555 21:20:12 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:17:23.555 21:20:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.555 21:20:12 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:17:23.555 21:20:12 -- common/autotest_common.sh@10 -- # set +x 00:17:23.555 21:20:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.814 21:20:12 -- target/discovery.sh@49 -- # check_bdevs= 00:17:23.814 21:20:12 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:17:23.814 21:20:12 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:17:23.814 21:20:12 -- target/discovery.sh@57 -- # nvmftestfini 00:17:23.814 21:20:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:23.814 21:20:12 -- nvmf/common.sh@117 -- # sync 00:17:23.814 21:20:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.814 21:20:12 -- nvmf/common.sh@120 -- # set +e 00:17:23.814 21:20:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.814 21:20:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.814 rmmod nvme_tcp 00:17:23.814 rmmod nvme_fabrics 00:17:23.814 rmmod nvme_keyring 00:17:23.814 21:20:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.814 21:20:12 -- nvmf/common.sh@124 -- # set -e 00:17:23.814 21:20:12 -- nvmf/common.sh@125 -- # return 0 00:17:23.814 21:20:12 -- nvmf/common.sh@478 -- # '[' -n 79033 ']' 00:17:23.814 21:20:12 -- nvmf/common.sh@479 -- # 
killprocess 79033 00:17:23.814 21:20:12 -- common/autotest_common.sh@936 -- # '[' -z 79033 ']' 00:17:23.814 21:20:12 -- common/autotest_common.sh@940 -- # kill -0 79033 00:17:23.814 21:20:12 -- common/autotest_common.sh@941 -- # uname 00:17:23.814 21:20:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:23.814 21:20:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79033 00:17:23.814 21:20:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:23.814 killing process with pid 79033 00:17:23.814 21:20:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:23.814 21:20:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79033' 00:17:23.814 21:20:12 -- common/autotest_common.sh@955 -- # kill 79033 00:17:23.814 [2024-04-26 21:20:12.979517] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:17:23.814 21:20:12 -- common/autotest_common.sh@960 -- # wait 79033 00:17:24.073 21:20:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:24.073 21:20:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:24.073 21:20:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:24.073 21:20:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.073 21:20:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.073 21:20:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.073 21:20:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.073 21:20:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.073 21:20:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:24.073 00:17:24.073 real 0m2.517s 00:17:24.073 user 0m6.967s 00:17:24.073 sys 0m0.666s 00:17:24.073 21:20:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:24.073 21:20:13 -- common/autotest_common.sh@10 -- # set +x 00:17:24.073 ************************************ 00:17:24.073 END TEST nvmf_discovery 00:17:24.073 ************************************ 00:17:24.073 21:20:13 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:17:24.073 21:20:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:24.073 21:20:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:24.073 21:20:13 -- common/autotest_common.sh@10 -- # set +x 00:17:24.332 ************************************ 00:17:24.332 START TEST nvmf_referrals 00:17:24.332 ************************************ 00:17:24.332 21:20:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:17:24.332 * Looking for test storage... 
00:17:24.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:24.332 21:20:13 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:24.332 21:20:13 -- nvmf/common.sh@7 -- # uname -s 00:17:24.332 21:20:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.332 21:20:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.332 21:20:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.332 21:20:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.332 21:20:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.332 21:20:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.332 21:20:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.332 21:20:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.332 21:20:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.332 21:20:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.332 21:20:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:17:24.332 21:20:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:17:24.332 21:20:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.332 21:20:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.332 21:20:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:24.332 21:20:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.332 21:20:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:24.332 21:20:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.332 21:20:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.332 21:20:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.332 21:20:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.332 21:20:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.332 21:20:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.332 21:20:13 -- paths/export.sh@5 -- # export PATH 00:17:24.332 21:20:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.332 21:20:13 -- nvmf/common.sh@47 -- # : 0 00:17:24.332 21:20:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:24.332 21:20:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:24.332 21:20:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.332 21:20:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.332 21:20:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.332 21:20:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:24.332 21:20:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:24.332 21:20:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:24.332 21:20:13 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:17:24.332 21:20:13 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:17:24.332 21:20:13 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:17:24.332 21:20:13 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:17:24.332 21:20:13 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:24.332 21:20:13 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:24.332 21:20:13 -- target/referrals.sh@37 -- # nvmftestinit 00:17:24.332 21:20:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:24.332 21:20:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.332 21:20:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:24.332 21:20:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:24.332 21:20:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:24.332 21:20:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.332 21:20:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.332 21:20:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.332 21:20:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:24.332 21:20:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:24.332 21:20:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:24.332 21:20:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:24.332 21:20:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:24.332 21:20:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:24.332 21:20:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.332 21:20:13 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:17:24.332 21:20:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:24.332 21:20:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:24.332 21:20:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:24.332 21:20:13 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:24.332 21:20:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:24.332 21:20:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.332 21:20:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:24.332 21:20:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:24.332 21:20:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:24.332 21:20:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:24.332 21:20:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:24.332 21:20:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:24.332 Cannot find device "nvmf_tgt_br" 00:17:24.591 21:20:13 -- nvmf/common.sh@155 -- # true 00:17:24.591 21:20:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:24.591 Cannot find device "nvmf_tgt_br2" 00:17:24.591 21:20:13 -- nvmf/common.sh@156 -- # true 00:17:24.591 21:20:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:24.591 21:20:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:24.591 Cannot find device "nvmf_tgt_br" 00:17:24.591 21:20:13 -- nvmf/common.sh@158 -- # true 00:17:24.591 21:20:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:24.591 Cannot find device "nvmf_tgt_br2" 00:17:24.591 21:20:13 -- nvmf/common.sh@159 -- # true 00:17:24.591 21:20:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:24.591 21:20:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:24.591 21:20:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:24.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.591 21:20:13 -- nvmf/common.sh@162 -- # true 00:17:24.591 21:20:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:24.591 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.591 21:20:13 -- nvmf/common.sh@163 -- # true 00:17:24.591 21:20:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:24.591 21:20:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:24.591 21:20:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:24.591 21:20:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:24.591 21:20:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:24.591 21:20:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:24.591 21:20:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:24.591 21:20:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:24.591 21:20:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:24.591 21:20:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:24.591 21:20:13 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:24.591 21:20:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
00:17:24.591 21:20:13 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:24.591 21:20:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:24.591 21:20:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:24.591 21:20:13 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:24.591 21:20:13 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:24.591 21:20:13 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:24.850 21:20:13 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:24.850 21:20:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:24.850 21:20:13 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:24.850 21:20:13 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:24.850 21:20:13 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:24.850 21:20:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:24.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:17:24.850 00:17:24.850 --- 10.0.0.2 ping statistics --- 00:17:24.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.850 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:24.850 21:20:13 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:24.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:24.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.127 ms 00:17:24.850 00:17:24.850 --- 10.0.0.3 ping statistics --- 00:17:24.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.850 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:17:24.850 21:20:13 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:24.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:24.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:17:24.850 00:17:24.850 --- 10.0.0.1 ping statistics --- 00:17:24.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.850 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:17:24.850 21:20:13 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.850 21:20:13 -- nvmf/common.sh@422 -- # return 0 00:17:24.850 21:20:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:24.850 21:20:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.850 21:20:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:24.850 21:20:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:24.850 21:20:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.850 21:20:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:24.851 21:20:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:24.851 21:20:13 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:17:24.851 21:20:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:24.851 21:20:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:24.851 21:20:13 -- common/autotest_common.sh@10 -- # set +x 00:17:24.851 21:20:13 -- nvmf/common.sh@470 -- # nvmfpid=79269 00:17:24.851 21:20:13 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:24.851 21:20:13 -- nvmf/common.sh@471 -- # waitforlisten 79269 00:17:24.851 21:20:13 -- common/autotest_common.sh@817 -- # '[' -z 79269 ']' 00:17:24.851 21:20:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.851 21:20:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:24.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.851 21:20:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.851 21:20:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:24.851 21:20:13 -- common/autotest_common.sh@10 -- # set +x 00:17:24.851 [2024-04-26 21:20:14.010815] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:24.851 [2024-04-26 21:20:14.010907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.109 [2024-04-26 21:20:14.153304] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:25.110 [2024-04-26 21:20:14.205652] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.110 [2024-04-26 21:20:14.205703] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.110 [2024-04-26 21:20:14.205710] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.110 [2024-04-26 21:20:14.205715] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.110 [2024-04-26 21:20:14.205721] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
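The nvmf_veth_init sequence traced above builds the test network the referrals test runs on: a nvmf_tgt_ns_spdk namespace holding the target ends of the veth pairs, a host-side bridge joining the peers, an iptables accept rule for TCP/4420, and ping checks in both directions. A condensed standalone sketch of the same topology, assuming iproute2/iptables and root, and dropping the second target interface (nvmf_tgt_if2 / 10.0.0.3) that the harness also creates:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                           # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace -> host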
00:17:25.110 [2024-04-26 21:20:14.205837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.110 [2024-04-26 21:20:14.206124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.110 [2024-04-26 21:20:14.206196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.110 [2024-04-26 21:20:14.206198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:25.676 21:20:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:25.676 21:20:14 -- common/autotest_common.sh@850 -- # return 0 00:17:25.676 21:20:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:25.676 21:20:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:25.676 21:20:14 -- common/autotest_common.sh@10 -- # set +x 00:17:25.934 21:20:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.934 21:20:14 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.934 21:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.934 21:20:14 -- common/autotest_common.sh@10 -- # set +x 00:17:25.934 [2024-04-26 21:20:14.963355] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.934 21:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.934 21:20:14 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:17:25.934 21:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.934 21:20:14 -- common/autotest_common.sh@10 -- # set +x 00:17:25.934 [2024-04-26 21:20:14.986255] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:25.934 21:20:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.934 21:20:14 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:17:25.934 21:20:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.934 21:20:14 -- common/autotest_common.sh@10 -- # set +x 00:17:25.935 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.935 21:20:15 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:17:25.935 21:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.935 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:17:25.935 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.935 21:20:15 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:17:25.935 21:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.935 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:17:25.935 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.935 21:20:15 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:25.935 21:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.935 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:17:25.935 21:20:15 -- target/referrals.sh@48 -- # jq length 00:17:25.935 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.935 21:20:15 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:17:25.935 21:20:15 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:17:25.935 21:20:15 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:25.935 21:20:15 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:25.935 21:20:15 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
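The referrals test body traced above and below is a short RPC/discovery round-trip: create the TCP transport, expose the discovery subsystem on 10.0.0.2:8009, add three referrals, then check that the RPC view and an nvme discover of the discovery log agree, and finally remove the referrals again. Reproduced by hand it would look roughly like the following, assuming SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock (the real test additionally passes --hostnqn/--hostid and exercises the -n subsystem-NQN variants):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430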
00:17:25.935 21:20:15 -- target/referrals.sh@21 -- # sort 00:17:25.935 21:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.935 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:17:25.935 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.935 21:20:15 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:25.935 21:20:15 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:25.935 21:20:15 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:17:25.935 21:20:15 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:25.935 21:20:15 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:25.935 21:20:15 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:25.935 21:20:15 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:25.935 21:20:15 -- target/referrals.sh@26 -- # sort 00:17:26.194 21:20:15 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:26.194 21:20:15 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:26.194 21:20:15 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:17:26.194 21:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.194 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:17:26.194 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.194 21:20:15 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:17:26.194 21:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.194 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:17:26.194 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.194 21:20:15 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:17:26.194 21:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.194 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:17:26.194 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.194 21:20:15 -- target/referrals.sh@56 -- # jq length 00:17:26.194 21:20:15 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:26.194 21:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.194 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:17:26.194 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.194 21:20:15 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:17:26.194 21:20:15 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:17:26.194 21:20:15 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:26.194 21:20:15 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:26.194 21:20:15 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.194 21:20:15 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:26.194 21:20:15 -- target/referrals.sh@26 -- # sort 00:17:26.194 21:20:15 -- target/referrals.sh@26 -- # echo 00:17:26.194 21:20:15 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:17:26.194 21:20:15 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:17:26.194 21:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.194 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:17:26.194 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.194 21:20:15 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:26.194 21:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.194 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:17:26.454 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.454 21:20:15 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:17:26.454 21:20:15 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:26.454 21:20:15 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:26.454 21:20:15 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:26.454 21:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.454 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:17:26.454 21:20:15 -- target/referrals.sh@21 -- # sort 00:17:26.454 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.454 21:20:15 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:17:26.454 21:20:15 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:26.454 21:20:15 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:17:26.454 21:20:15 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:26.454 21:20:15 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:26.454 21:20:15 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.454 21:20:15 -- target/referrals.sh@26 -- # sort 00:17:26.454 21:20:15 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:26.454 21:20:15 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:17:26.454 21:20:15 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:26.454 21:20:15 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:17:26.454 21:20:15 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:26.454 21:20:15 -- target/referrals.sh@67 -- # jq -r .subnqn 00:17:26.454 21:20:15 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:26.454 21:20:15 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.454 21:20:15 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:17:26.454 21:20:15 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:17:26.454 21:20:15 -- target/referrals.sh@68 -- # jq -r .subnqn 00:17:26.454 21:20:15 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:17:26.454 21:20:15 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:26.454 21:20:15 -- target/referrals.sh@33 -- # nvme 
discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.729 21:20:15 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:26.729 21:20:15 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:26.729 21:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.729 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:17:26.729 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.729 21:20:15 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:17:26.729 21:20:15 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:26.729 21:20:15 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:26.729 21:20:15 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:26.729 21:20:15 -- target/referrals.sh@21 -- # sort 00:17:26.729 21:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.729 21:20:15 -- common/autotest_common.sh@10 -- # set +x 00:17:26.729 21:20:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.729 21:20:15 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:17:26.729 21:20:15 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:26.729 21:20:15 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:17:26.729 21:20:15 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:26.729 21:20:15 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:26.729 21:20:15 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.729 21:20:15 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:26.729 21:20:15 -- target/referrals.sh@26 -- # sort 00:17:26.729 21:20:15 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:17:26.729 21:20:15 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:26.729 21:20:15 -- target/referrals.sh@75 -- # jq -r .subnqn 00:17:26.729 21:20:15 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:17:26.729 21:20:15 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:26.729 21:20:15 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:26.729 21:20:15 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.729 21:20:15 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:17:26.729 21:20:15 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:17:26.729 21:20:15 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:17:26.729 21:20:15 -- target/referrals.sh@76 -- # jq -r .subnqn 00:17:26.729 21:20:15 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.729 21:20:15 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:26.989 21:20:16 -- 
target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:26.989 21:20:16 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:17:26.989 21:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.989 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:17:26.989 21:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.989 21:20:16 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:26.989 21:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.989 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:17:26.989 21:20:16 -- target/referrals.sh@82 -- # jq length 00:17:26.989 21:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.989 21:20:16 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:17:26.989 21:20:16 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:17:26.989 21:20:16 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:26.989 21:20:16 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:26.990 21:20:16 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:26.990 21:20:16 -- target/referrals.sh@26 -- # sort 00:17:26.990 21:20:16 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:26.990 21:20:16 -- target/referrals.sh@26 -- # echo 00:17:26.990 21:20:16 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:17:26.990 21:20:16 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:17:26.990 21:20:16 -- target/referrals.sh@86 -- # nvmftestfini 00:17:26.990 21:20:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:26.990 21:20:16 -- nvmf/common.sh@117 -- # sync 00:17:26.990 21:20:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.990 21:20:16 -- nvmf/common.sh@120 -- # set +e 00:17:26.990 21:20:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.990 21:20:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.990 rmmod nvme_tcp 00:17:26.990 rmmod nvme_fabrics 00:17:27.250 rmmod nvme_keyring 00:17:27.250 21:20:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:27.250 21:20:16 -- nvmf/common.sh@124 -- # set -e 00:17:27.250 21:20:16 -- nvmf/common.sh@125 -- # return 0 00:17:27.250 21:20:16 -- nvmf/common.sh@478 -- # '[' -n 79269 ']' 00:17:27.250 21:20:16 -- nvmf/common.sh@479 -- # killprocess 79269 00:17:27.250 21:20:16 -- common/autotest_common.sh@936 -- # '[' -z 79269 ']' 00:17:27.250 21:20:16 -- common/autotest_common.sh@940 -- # kill -0 79269 00:17:27.250 21:20:16 -- common/autotest_common.sh@941 -- # uname 00:17:27.250 21:20:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:27.250 21:20:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79269 00:17:27.250 21:20:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:27.250 21:20:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:27.250 killing process with pid 79269 00:17:27.250 21:20:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79269' 00:17:27.250 21:20:16 -- common/autotest_common.sh@955 -- # kill 79269 00:17:27.250 21:20:16 -- common/autotest_common.sh@960 -- # wait 79269 00:17:27.250 21:20:16 -- nvmf/common.sh@481 -- # 
'[' '' == iso ']' 00:17:27.250 21:20:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:27.250 21:20:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:27.250 21:20:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:27.250 21:20:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:27.250 21:20:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.250 21:20:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.250 21:20:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.509 21:20:16 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:27.509 00:17:27.509 real 0m3.184s 00:17:27.509 user 0m10.218s 00:17:27.509 sys 0m0.982s 00:17:27.509 21:20:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:27.509 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:17:27.509 ************************************ 00:17:27.509 END TEST nvmf_referrals 00:17:27.509 ************************************ 00:17:27.509 21:20:16 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:27.509 21:20:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:27.509 21:20:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:27.509 21:20:16 -- common/autotest_common.sh@10 -- # set +x 00:17:27.509 ************************************ 00:17:27.509 START TEST nvmf_connect_disconnect 00:17:27.509 ************************************ 00:17:27.509 21:20:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:27.768 * Looking for test storage... 00:17:27.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:27.768 21:20:16 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:27.768 21:20:16 -- nvmf/common.sh@7 -- # uname -s 00:17:27.768 21:20:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.768 21:20:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.768 21:20:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.768 21:20:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.768 21:20:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.768 21:20:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.768 21:20:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.768 21:20:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.768 21:20:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.768 21:20:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.768 21:20:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:17:27.768 21:20:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:17:27.768 21:20:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.768 21:20:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.768 21:20:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:27.768 21:20:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.768 21:20:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:27.768 21:20:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.768 21:20:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:17:27.768 21:20:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.768 21:20:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.768 21:20:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.768 21:20:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.768 21:20:16 -- paths/export.sh@5 -- # export PATH 00:17:27.768 21:20:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.768 21:20:16 -- nvmf/common.sh@47 -- # : 0 00:17:27.768 21:20:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:27.768 21:20:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:27.768 21:20:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.768 21:20:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.768 21:20:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.768 21:20:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:27.768 21:20:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:27.768 21:20:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:27.768 21:20:16 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.768 21:20:16 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.768 21:20:16 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:17:27.768 21:20:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:27.768 21:20:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.768 21:20:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:27.768 21:20:16 -- 
nvmf/common.sh@399 -- # local -g is_hw=no 00:17:27.768 21:20:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:27.768 21:20:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.768 21:20:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:27.768 21:20:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.768 21:20:16 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:27.768 21:20:16 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:27.769 21:20:16 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:27.769 21:20:16 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:27.769 21:20:16 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:27.769 21:20:16 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:27.769 21:20:16 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.769 21:20:16 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.769 21:20:16 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:27.769 21:20:16 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:27.769 21:20:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:27.769 21:20:16 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:27.769 21:20:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:27.769 21:20:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.769 21:20:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:27.769 21:20:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:27.769 21:20:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:27.769 21:20:16 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:27.769 21:20:16 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:27.769 21:20:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:27.769 Cannot find device "nvmf_tgt_br" 00:17:27.769 21:20:16 -- nvmf/common.sh@155 -- # true 00:17:27.769 21:20:16 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.769 Cannot find device "nvmf_tgt_br2" 00:17:27.769 21:20:16 -- nvmf/common.sh@156 -- # true 00:17:27.769 21:20:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:27.769 21:20:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:27.769 Cannot find device "nvmf_tgt_br" 00:17:27.769 21:20:16 -- nvmf/common.sh@158 -- # true 00:17:27.769 21:20:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:27.769 Cannot find device "nvmf_tgt_br2" 00:17:27.769 21:20:16 -- nvmf/common.sh@159 -- # true 00:17:27.769 21:20:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:27.769 21:20:16 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:27.769 21:20:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.769 21:20:17 -- nvmf/common.sh@162 -- # true 00:17:27.769 21:20:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.769 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:27.769 21:20:17 -- nvmf/common.sh@163 -- # true 00:17:27.769 21:20:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:27.769 21:20:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.028 21:20:17 -- nvmf/common.sh@170 -- 
# ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.028 21:20:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:28.028 21:20:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:28.028 21:20:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:28.028 21:20:17 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:28.028 21:20:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:28.028 21:20:17 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:28.028 21:20:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:28.028 21:20:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:28.028 21:20:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:28.028 21:20:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:28.028 21:20:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.028 21:20:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.028 21:20:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:28.028 21:20:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:28.028 21:20:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:28.028 21:20:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:28.028 21:20:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:28.028 21:20:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:28.028 21:20:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:28.028 21:20:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:28.028 21:20:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:28.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:17:28.028 00:17:28.028 --- 10.0.0.2 ping statistics --- 00:17:28.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.028 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:17:28.028 21:20:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:28.028 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:28.028 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:17:28.028 00:17:28.028 --- 10.0.0.3 ping statistics --- 00:17:28.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.028 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:28.028 21:20:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:28.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:28.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:17:28.028 00:17:28.028 --- 10.0.0.1 ping statistics --- 00:17:28.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.028 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:28.028 21:20:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.028 21:20:17 -- nvmf/common.sh@422 -- # return 0 00:17:28.028 21:20:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:28.028 21:20:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.028 21:20:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:28.028 21:20:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:28.028 21:20:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.028 21:20:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:28.028 21:20:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:28.028 21:20:17 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:17:28.028 21:20:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:28.028 21:20:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:28.028 21:20:17 -- common/autotest_common.sh@10 -- # set +x 00:17:28.028 21:20:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:28.028 21:20:17 -- nvmf/common.sh@470 -- # nvmfpid=79581 00:17:28.028 21:20:17 -- nvmf/common.sh@471 -- # waitforlisten 79581 00:17:28.028 21:20:17 -- common/autotest_common.sh@817 -- # '[' -z 79581 ']' 00:17:28.028 21:20:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.028 21:20:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:28.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.028 21:20:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.028 21:20:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:28.028 21:20:17 -- common/autotest_common.sh@10 -- # set +x 00:17:28.028 [2024-04-26 21:20:17.272787] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:28.028 [2024-04-26 21:20:17.272856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.295 [2024-04-26 21:20:17.403938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:28.295 [2024-04-26 21:20:17.467761] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.295 [2024-04-26 21:20:17.467813] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.295 [2024-04-26 21:20:17.467820] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.295 [2024-04-26 21:20:17.467826] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.295 [2024-04-26 21:20:17.467831] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
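nvmfappstart, traced above, launches nvmf_tgt inside the test namespace and blocks until its RPC socket answers before any rpc_cmd is issued. A rough standalone equivalent; the polling loop is only a stand-in for the harness's waitforlisten helper:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # crude waitforlisten: poll until the RPC socket exists and responds
  until [ -S /var/tmp/spdk.sock ] && scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done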
00:17:28.295 [2024-04-26 21:20:17.467906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.296 [2024-04-26 21:20:17.468274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.296 [2024-04-26 21:20:17.468386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.296 [2024-04-26 21:20:17.468391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:29.233 21:20:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:29.233 21:20:18 -- common/autotest_common.sh@850 -- # return 0 00:17:29.233 21:20:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:29.233 21:20:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:29.233 21:20:18 -- common/autotest_common.sh@10 -- # set +x 00:17:29.233 21:20:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.233 21:20:18 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:29.233 21:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.233 21:20:18 -- common/autotest_common.sh@10 -- # set +x 00:17:29.233 [2024-04-26 21:20:18.274520] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.233 21:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.233 21:20:18 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:17:29.233 21:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.233 21:20:18 -- common/autotest_common.sh@10 -- # set +x 00:17:29.233 21:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.233 21:20:18 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:17:29.233 21:20:18 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:29.233 21:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.233 21:20:18 -- common/autotest_common.sh@10 -- # set +x 00:17:29.233 21:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.233 21:20:18 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:29.233 21:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.233 21:20:18 -- common/autotest_common.sh@10 -- # set +x 00:17:29.233 21:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.233 21:20:18 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:29.233 21:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.233 21:20:18 -- common/autotest_common.sh@10 -- # set +x 00:17:29.233 [2024-04-26 21:20:18.352569] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.233 21:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.233 21:20:18 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:17:29.233 21:20:18 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:17:29.233 21:20:18 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:17:29.233 21:20:18 -- target/connect_disconnect.sh@34 -- # set +x 00:17:31.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:17:40.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:45.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:00.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:07.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:11.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:16.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:23.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:25.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:27.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:29.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:32.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:34.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:38.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:41.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:45.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:47.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:51.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:54.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:56.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:59.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:00.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:03.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:05.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:07.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:10.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:14.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:16.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:19.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:20.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:23.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:25.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:27.920 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:19:30.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:32.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:34.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:36.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:39.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:41.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:43.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:45.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:48.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:50.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:52.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:54.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:57.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:59.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:01.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:03.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:06.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:08.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:10.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:12.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:15.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:17.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:19.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:21.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:23.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:26.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:28.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:30.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:32.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:34.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:37.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:39.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:41.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:44.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:46.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:48.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:50.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:52.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:55.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:57.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:59.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:01.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:04.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:06.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:08.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:10.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:13.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:13.065 21:24:01 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
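Each "disconnected 1 controller(s)" line above is one pass of connect_disconnect.sh's loop: the target exports a 64 MiB, 512-byte-block malloc bdev as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and the initiator connects and disconnects num_iterations=100 times with eight I/O queues (nvme connect -i 8). Stripped of the harness's per-iteration device checks, the setup and one loop body are roughly:

  scripts/rpc.py bdev_malloc_create 64 512                                   # -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints "... disconnected 1 controller(s)"
  done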
00:21:13.065 21:24:01 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:21:13.065 21:24:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:13.065 21:24:01 -- nvmf/common.sh@117 -- # sync 00:21:13.065 21:24:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:13.065 21:24:01 -- nvmf/common.sh@120 -- # set +e 00:21:13.065 21:24:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:13.065 21:24:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:13.065 rmmod nvme_tcp 00:21:13.065 rmmod nvme_fabrics 00:21:13.065 rmmod nvme_keyring 00:21:13.065 21:24:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:13.065 21:24:01 -- nvmf/common.sh@124 -- # set -e 00:21:13.065 21:24:01 -- nvmf/common.sh@125 -- # return 0 00:21:13.065 21:24:01 -- nvmf/common.sh@478 -- # '[' -n 79581 ']' 00:21:13.065 21:24:01 -- nvmf/common.sh@479 -- # killprocess 79581 00:21:13.065 21:24:01 -- common/autotest_common.sh@936 -- # '[' -z 79581 ']' 00:21:13.065 21:24:01 -- common/autotest_common.sh@940 -- # kill -0 79581 00:21:13.065 21:24:01 -- common/autotest_common.sh@941 -- # uname 00:21:13.065 21:24:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:13.065 21:24:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79581 00:21:13.065 21:24:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:13.065 21:24:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:13.065 killing process with pid 79581 00:21:13.065 21:24:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79581' 00:21:13.065 21:24:01 -- common/autotest_common.sh@955 -- # kill 79581 00:21:13.065 21:24:01 -- common/autotest_common.sh@960 -- # wait 79581 00:21:13.065 21:24:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:13.065 21:24:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:13.065 21:24:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:13.065 21:24:02 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.065 21:24:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:13.065 21:24:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.065 21:24:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.065 21:24:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.065 21:24:02 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:13.065 00:21:13.065 real 3m45.571s 00:21:13.065 user 14m48.728s 00:21:13.065 sys 0m15.387s 00:21:13.065 21:24:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:13.065 21:24:02 -- common/autotest_common.sh@10 -- # set +x 00:21:13.065 ************************************ 00:21:13.065 END TEST nvmf_connect_disconnect 00:21:13.065 ************************************ 00:21:13.065 21:24:02 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:21:13.065 21:24:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:13.065 21:24:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:13.065 21:24:02 -- common/autotest_common.sh@10 -- # set +x 00:21:13.324 ************************************ 00:21:13.324 START TEST nvmf_multitarget 00:21:13.324 ************************************ 00:21:13.324 21:24:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:21:13.324 * Looking for test storage... 
00:21:13.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:13.324 21:24:02 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:13.324 21:24:02 -- nvmf/common.sh@7 -- # uname -s 00:21:13.324 21:24:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.324 21:24:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.324 21:24:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.324 21:24:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.324 21:24:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.324 21:24:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.324 21:24:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.324 21:24:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.324 21:24:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.324 21:24:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.324 21:24:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:21:13.324 21:24:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:21:13.324 21:24:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.324 21:24:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.324 21:24:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:13.324 21:24:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.324 21:24:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:13.324 21:24:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.324 21:24:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.324 21:24:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.324 21:24:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.324 21:24:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.324 21:24:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.324 21:24:02 -- paths/export.sh@5 -- # export PATH 00:21:13.324 21:24:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.324 21:24:02 -- nvmf/common.sh@47 -- # : 0 00:21:13.324 21:24:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:13.324 21:24:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:13.324 21:24:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.324 21:24:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.324 21:24:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.324 21:24:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:13.324 21:24:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:13.324 21:24:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:13.324 21:24:02 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:21:13.324 21:24:02 -- target/multitarget.sh@15 -- # nvmftestinit 00:21:13.324 21:24:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:13.324 21:24:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.324 21:24:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:13.324 21:24:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:13.324 21:24:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:13.324 21:24:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.324 21:24:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.324 21:24:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.324 21:24:02 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:13.324 21:24:02 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:13.324 21:24:02 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:13.324 21:24:02 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:13.324 21:24:02 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:13.324 21:24:02 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:13.324 21:24:02 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.324 21:24:02 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.324 21:24:02 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:13.324 21:24:02 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:13.324 21:24:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:13.324 21:24:02 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:13.324 21:24:02 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:13.324 21:24:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.324 21:24:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:13.324 21:24:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:13.324 21:24:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:13.324 21:24:02 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:13.324 21:24:02 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:13.324 21:24:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:13.324 Cannot find device "nvmf_tgt_br" 00:21:13.324 21:24:02 -- nvmf/common.sh@155 -- # true 00:21:13.325 21:24:02 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:13.325 Cannot find device "nvmf_tgt_br2" 00:21:13.325 21:24:02 -- nvmf/common.sh@156 -- # true 00:21:13.325 21:24:02 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:13.325 21:24:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:13.583 Cannot find device "nvmf_tgt_br" 00:21:13.583 21:24:02 -- nvmf/common.sh@158 -- # true 00:21:13.583 21:24:02 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:13.583 Cannot find device "nvmf_tgt_br2" 00:21:13.583 21:24:02 -- nvmf/common.sh@159 -- # true 00:21:13.583 21:24:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:13.583 21:24:02 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:13.583 21:24:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:13.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:13.583 21:24:02 -- nvmf/common.sh@162 -- # true 00:21:13.583 21:24:02 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:13.583 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:13.583 21:24:02 -- nvmf/common.sh@163 -- # true 00:21:13.583 21:24:02 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:13.583 21:24:02 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:13.583 21:24:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:13.583 21:24:02 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:13.583 21:24:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:13.583 21:24:02 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:13.583 21:24:02 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:13.583 21:24:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:13.583 21:24:02 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:13.583 21:24:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:13.583 21:24:02 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:13.583 21:24:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:13.583 21:24:02 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:13.583 21:24:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:13.583 21:24:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:13.583 21:24:02 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:21:13.584 21:24:02 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:13.584 21:24:02 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:13.584 21:24:02 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:13.584 21:24:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:13.584 21:24:02 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:13.584 21:24:02 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:13.584 21:24:02 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:13.584 21:24:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:13.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:21:13.584 00:21:13.584 --- 10.0.0.2 ping statistics --- 00:21:13.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.584 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:13.584 21:24:02 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:13.584 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:13.584 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:21:13.584 00:21:13.584 --- 10.0.0.3 ping statistics --- 00:21:13.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.584 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:13.584 21:24:02 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:13.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:13.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:21:13.584 00:21:13.584 --- 10.0.0.1 ping statistics --- 00:21:13.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.584 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:21:13.584 21:24:02 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.584 21:24:02 -- nvmf/common.sh@422 -- # return 0 00:21:13.584 21:24:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:13.584 21:24:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.584 21:24:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:13.584 21:24:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:13.584 21:24:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.584 21:24:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:13.584 21:24:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:13.843 21:24:02 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:21:13.843 21:24:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:13.843 21:24:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:13.843 21:24:02 -- common/autotest_common.sh@10 -- # set +x 00:21:13.843 21:24:02 -- nvmf/common.sh@470 -- # nvmfpid=83351 00:21:13.843 21:24:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:13.843 21:24:02 -- nvmf/common.sh@471 -- # waitforlisten 83351 00:21:13.843 21:24:02 -- common/autotest_common.sh@817 -- # '[' -z 83351 ']' 00:21:13.843 21:24:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.843 21:24:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:13.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
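For reference, the nvmf_veth_init sequence traced above amounts to: one veth pair whose far end is moved into the nvmf_tgt_ns_spdk namespace for the target, a second pair left in the root namespace for the initiator, and a bridge joining the peer ends. A condensed sketch with the device names, addresses, and namespace taken from the trace (the second target interface and error handling are omitted):

  # create the namespace and the two veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, root ns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # address the endpoints
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  # bridge the peer ends together and bring everything up
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # allow the NVMe/TCP port in and verify reachability across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2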
00:21:13.843 21:24:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.843 21:24:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:13.843 21:24:02 -- common/autotest_common.sh@10 -- # set +x 00:21:13.843 [2024-04-26 21:24:02.885856] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:13.843 [2024-04-26 21:24:02.885940] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.843 [2024-04-26 21:24:03.029572] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:13.843 [2024-04-26 21:24:03.083730] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.843 [2024-04-26 21:24:03.083787] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.843 [2024-04-26 21:24:03.083794] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.843 [2024-04-26 21:24:03.083800] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.843 [2024-04-26 21:24:03.083805] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.843 [2024-04-26 21:24:03.083979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.843 [2024-04-26 21:24:03.084151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:13.843 [2024-04-26 21:24:03.084156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.843 [2024-04-26 21:24:03.084104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.777 21:24:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:14.777 21:24:03 -- common/autotest_common.sh@850 -- # return 0 00:21:14.777 21:24:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:14.777 21:24:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:14.777 21:24:03 -- common/autotest_common.sh@10 -- # set +x 00:21:14.777 21:24:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.777 21:24:03 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:14.777 21:24:03 -- target/multitarget.sh@21 -- # jq length 00:21:14.777 21:24:03 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:21:14.777 21:24:03 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:21:14.777 21:24:03 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:21:15.035 "nvmf_tgt_1" 00:21:15.035 21:24:04 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:21:15.035 "nvmf_tgt_2" 00:21:15.035 21:24:04 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:21:15.035 21:24:04 -- target/multitarget.sh@28 -- # jq length 00:21:15.293 21:24:04 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:21:15.293 21:24:04 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_delete_target -n nvmf_tgt_1 00:21:15.293 true 00:21:15.293 21:24:04 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:21:15.552 true 00:21:15.552 21:24:04 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:21:15.552 21:24:04 -- target/multitarget.sh@35 -- # jq length 00:21:15.552 21:24:04 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:21:15.552 21:24:04 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:21:15.552 21:24:04 -- target/multitarget.sh@41 -- # nvmftestfini 00:21:15.552 21:24:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:15.552 21:24:04 -- nvmf/common.sh@117 -- # sync 00:21:15.812 21:24:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:15.812 21:24:04 -- nvmf/common.sh@120 -- # set +e 00:21:15.812 21:24:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:15.812 21:24:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:15.812 rmmod nvme_tcp 00:21:15.812 rmmod nvme_fabrics 00:21:15.812 rmmod nvme_keyring 00:21:15.813 21:24:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:15.813 21:24:04 -- nvmf/common.sh@124 -- # set -e 00:21:15.813 21:24:04 -- nvmf/common.sh@125 -- # return 0 00:21:15.813 21:24:04 -- nvmf/common.sh@478 -- # '[' -n 83351 ']' 00:21:15.813 21:24:04 -- nvmf/common.sh@479 -- # killprocess 83351 00:21:15.813 21:24:04 -- common/autotest_common.sh@936 -- # '[' -z 83351 ']' 00:21:15.813 21:24:04 -- common/autotest_common.sh@940 -- # kill -0 83351 00:21:15.813 21:24:04 -- common/autotest_common.sh@941 -- # uname 00:21:15.813 21:24:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:15.813 21:24:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83351 00:21:15.813 21:24:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:15.813 21:24:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:15.813 killing process with pid 83351 00:21:15.813 21:24:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83351' 00:21:15.813 21:24:04 -- common/autotest_common.sh@955 -- # kill 83351 00:21:15.813 21:24:04 -- common/autotest_common.sh@960 -- # wait 83351 00:21:16.078 21:24:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:16.078 21:24:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:16.078 21:24:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:16.078 21:24:05 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:16.078 21:24:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:16.078 21:24:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.078 21:24:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.078 21:24:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.078 21:24:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:16.078 00:21:16.078 real 0m2.791s 00:21:16.078 user 0m9.302s 00:21:16.078 sys 0m0.632s 00:21:16.078 21:24:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:16.078 21:24:05 -- common/autotest_common.sh@10 -- # set +x 00:21:16.078 ************************************ 00:21:16.078 END TEST nvmf_multitarget 00:21:16.078 ************************************ 00:21:16.078 21:24:05 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:21:16.078 21:24:05 -- common/autotest_common.sh@1087 -- # '[' 3 
-le 1 ']' 00:21:16.078 21:24:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:16.078 21:24:05 -- common/autotest_common.sh@10 -- # set +x 00:21:16.078 ************************************ 00:21:16.078 START TEST nvmf_rpc 00:21:16.078 ************************************ 00:21:16.078 21:24:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:21:16.335 * Looking for test storage... 00:21:16.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:16.336 21:24:05 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:16.336 21:24:05 -- nvmf/common.sh@7 -- # uname -s 00:21:16.336 21:24:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.336 21:24:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.336 21:24:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.336 21:24:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.336 21:24:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.336 21:24:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.336 21:24:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.336 21:24:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.336 21:24:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.336 21:24:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.336 21:24:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:21:16.336 21:24:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:21:16.336 21:24:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.336 21:24:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.336 21:24:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:16.336 21:24:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.336 21:24:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:16.336 21:24:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.336 21:24:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.336 21:24:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.336 21:24:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.336 21:24:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.336 21:24:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.336 21:24:05 -- paths/export.sh@5 -- # export PATH 00:21:16.336 21:24:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.336 21:24:05 -- nvmf/common.sh@47 -- # : 0 00:21:16.336 21:24:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.336 21:24:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.336 21:24:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.336 21:24:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.336 21:24:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.336 21:24:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.336 21:24:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.336 21:24:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.336 21:24:05 -- target/rpc.sh@11 -- # loops=5 00:21:16.336 21:24:05 -- target/rpc.sh@23 -- # nvmftestinit 00:21:16.336 21:24:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:16.336 21:24:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.336 21:24:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:16.336 21:24:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:16.336 21:24:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:16.336 21:24:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.336 21:24:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.336 21:24:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.336 21:24:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:16.336 21:24:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:16.336 21:24:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:16.336 21:24:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:16.336 21:24:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:16.336 21:24:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:16.336 21:24:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.336 21:24:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.336 21:24:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:16.336 21:24:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:16.336 21:24:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:16.336 21:24:05 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:16.336 21:24:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:16.336 21:24:05 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.336 21:24:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:16.336 21:24:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:16.336 21:24:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:16.336 21:24:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:16.336 21:24:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:16.336 21:24:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:16.336 Cannot find device "nvmf_tgt_br" 00:21:16.336 21:24:05 -- nvmf/common.sh@155 -- # true 00:21:16.336 21:24:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:16.336 Cannot find device "nvmf_tgt_br2" 00:21:16.336 21:24:05 -- nvmf/common.sh@156 -- # true 00:21:16.336 21:24:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:16.336 21:24:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:16.336 Cannot find device "nvmf_tgt_br" 00:21:16.336 21:24:05 -- nvmf/common.sh@158 -- # true 00:21:16.336 21:24:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:16.336 Cannot find device "nvmf_tgt_br2" 00:21:16.336 21:24:05 -- nvmf/common.sh@159 -- # true 00:21:16.336 21:24:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:16.336 21:24:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:16.336 21:24:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:16.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.336 21:24:05 -- nvmf/common.sh@162 -- # true 00:21:16.336 21:24:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:16.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.336 21:24:05 -- nvmf/common.sh@163 -- # true 00:21:16.336 21:24:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:16.336 21:24:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:16.336 21:24:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:16.336 21:24:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:16.336 21:24:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:16.336 21:24:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:16.595 21:24:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:16.595 21:24:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:16.595 21:24:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:16.595 21:24:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:16.595 21:24:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:16.595 21:24:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:16.595 21:24:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:16.595 21:24:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:16.595 21:24:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:16.595 21:24:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:16.595 21:24:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type 
bridge 00:21:16.595 21:24:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:16.595 21:24:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:16.595 21:24:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:16.595 21:24:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:16.595 21:24:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:16.595 21:24:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:16.595 21:24:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:16.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:21:16.595 00:21:16.595 --- 10.0.0.2 ping statistics --- 00:21:16.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.595 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:16.595 21:24:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:16.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:16.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:21:16.595 00:21:16.595 --- 10.0.0.3 ping statistics --- 00:21:16.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.595 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:16.595 21:24:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:16.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:21:16.595 00:21:16.595 --- 10.0.0.1 ping statistics --- 00:21:16.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.595 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:16.595 21:24:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.595 21:24:05 -- nvmf/common.sh@422 -- # return 0 00:21:16.595 21:24:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:16.595 21:24:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.595 21:24:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:16.595 21:24:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:16.595 21:24:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.595 21:24:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:16.595 21:24:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:16.595 21:24:05 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:21:16.595 21:24:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:16.595 21:24:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:16.595 21:24:05 -- common/autotest_common.sh@10 -- # set +x 00:21:16.595 21:24:05 -- nvmf/common.sh@470 -- # nvmfpid=83579 00:21:16.595 21:24:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:16.595 21:24:05 -- nvmf/common.sh@471 -- # waitforlisten 83579 00:21:16.595 21:24:05 -- common/autotest_common.sh@817 -- # '[' -z 83579 ']' 00:21:16.595 21:24:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.595 21:24:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:16.595 21:24:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
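As in the previous test, the target application is then launched inside the namespace and the harness blocks until its RPC socket is up before issuing any nvmf_* RPCs. A simplified sketch of that launch, with the command line and socket path taken from the trace above (PID handling reduced to a shell variable; the poll loop stands in for the harness's own waitforlisten helper and is an assumption):

  # start nvmf_tgt inside the target namespace, in the background
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait for the application's RPC socket before sending RPCs (simplified poll)
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done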
00:21:16.595 21:24:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:16.595 21:24:05 -- common/autotest_common.sh@10 -- # set +x 00:21:16.595 [2024-04-26 21:24:05.745542] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:16.595 [2024-04-26 21:24:05.745638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.854 [2024-04-26 21:24:05.889069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:16.854 [2024-04-26 21:24:05.964133] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.854 [2024-04-26 21:24:05.964192] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.854 [2024-04-26 21:24:05.964199] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.854 [2024-04-26 21:24:05.964205] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.854 [2024-04-26 21:24:05.964211] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.854 [2024-04-26 21:24:05.964383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.854 [2024-04-26 21:24:05.964456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.854 [2024-04-26 21:24:05.964641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:16.854 [2024-04-26 21:24:05.964650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.791 21:24:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:17.791 21:24:06 -- common/autotest_common.sh@850 -- # return 0 00:21:17.791 21:24:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:17.791 21:24:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:17.791 21:24:06 -- common/autotest_common.sh@10 -- # set +x 00:21:17.791 21:24:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.791 21:24:06 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:21:17.791 21:24:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.791 21:24:06 -- common/autotest_common.sh@10 -- # set +x 00:21:17.791 21:24:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.791 21:24:06 -- target/rpc.sh@26 -- # stats='{ 00:21:17.791 "poll_groups": [ 00:21:17.791 { 00:21:17.791 "admin_qpairs": 0, 00:21:17.791 "completed_nvme_io": 0, 00:21:17.791 "current_admin_qpairs": 0, 00:21:17.791 "current_io_qpairs": 0, 00:21:17.791 "io_qpairs": 0, 00:21:17.791 "name": "nvmf_tgt_poll_group_0", 00:21:17.791 "pending_bdev_io": 0, 00:21:17.791 "transports": [] 00:21:17.791 }, 00:21:17.791 { 00:21:17.791 "admin_qpairs": 0, 00:21:17.791 "completed_nvme_io": 0, 00:21:17.791 "current_admin_qpairs": 0, 00:21:17.791 "current_io_qpairs": 0, 00:21:17.791 "io_qpairs": 0, 00:21:17.791 "name": "nvmf_tgt_poll_group_1", 00:21:17.791 "pending_bdev_io": 0, 00:21:17.791 "transports": [] 00:21:17.791 }, 00:21:17.791 { 00:21:17.791 "admin_qpairs": 0, 00:21:17.791 "completed_nvme_io": 0, 00:21:17.791 "current_admin_qpairs": 0, 00:21:17.791 "current_io_qpairs": 0, 00:21:17.791 "io_qpairs": 0, 00:21:17.791 "name": "nvmf_tgt_poll_group_2", 00:21:17.791 "pending_bdev_io": 0, 00:21:17.791 "transports": [] 00:21:17.791 }, 00:21:17.791 { 
00:21:17.791 "admin_qpairs": 0, 00:21:17.791 "completed_nvme_io": 0, 00:21:17.791 "current_admin_qpairs": 0, 00:21:17.791 "current_io_qpairs": 0, 00:21:17.791 "io_qpairs": 0, 00:21:17.791 "name": "nvmf_tgt_poll_group_3", 00:21:17.791 "pending_bdev_io": 0, 00:21:17.791 "transports": [] 00:21:17.791 } 00:21:17.791 ], 00:21:17.791 "tick_rate": 2290000000 00:21:17.791 }' 00:21:17.791 21:24:06 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:21:17.791 21:24:06 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:21:17.791 21:24:06 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:21:17.791 21:24:06 -- target/rpc.sh@15 -- # wc -l 00:21:17.791 21:24:06 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:21:17.791 21:24:06 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:21:17.791 21:24:06 -- target/rpc.sh@29 -- # [[ null == null ]] 00:21:17.791 21:24:06 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:17.791 21:24:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.791 21:24:06 -- common/autotest_common.sh@10 -- # set +x 00:21:17.791 [2024-04-26 21:24:06.854846] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.791 21:24:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.791 21:24:06 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:21:17.791 21:24:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.791 21:24:06 -- common/autotest_common.sh@10 -- # set +x 00:21:17.791 21:24:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.791 21:24:06 -- target/rpc.sh@33 -- # stats='{ 00:21:17.791 "poll_groups": [ 00:21:17.791 { 00:21:17.791 "admin_qpairs": 0, 00:21:17.791 "completed_nvme_io": 0, 00:21:17.791 "current_admin_qpairs": 0, 00:21:17.791 "current_io_qpairs": 0, 00:21:17.791 "io_qpairs": 0, 00:21:17.791 "name": "nvmf_tgt_poll_group_0", 00:21:17.791 "pending_bdev_io": 0, 00:21:17.791 "transports": [ 00:21:17.791 { 00:21:17.791 "trtype": "TCP" 00:21:17.791 } 00:21:17.791 ] 00:21:17.791 }, 00:21:17.791 { 00:21:17.791 "admin_qpairs": 0, 00:21:17.791 "completed_nvme_io": 0, 00:21:17.791 "current_admin_qpairs": 0, 00:21:17.791 "current_io_qpairs": 0, 00:21:17.791 "io_qpairs": 0, 00:21:17.791 "name": "nvmf_tgt_poll_group_1", 00:21:17.791 "pending_bdev_io": 0, 00:21:17.791 "transports": [ 00:21:17.791 { 00:21:17.791 "trtype": "TCP" 00:21:17.791 } 00:21:17.791 ] 00:21:17.791 }, 00:21:17.791 { 00:21:17.791 "admin_qpairs": 0, 00:21:17.791 "completed_nvme_io": 0, 00:21:17.791 "current_admin_qpairs": 0, 00:21:17.791 "current_io_qpairs": 0, 00:21:17.791 "io_qpairs": 0, 00:21:17.791 "name": "nvmf_tgt_poll_group_2", 00:21:17.791 "pending_bdev_io": 0, 00:21:17.791 "transports": [ 00:21:17.791 { 00:21:17.791 "trtype": "TCP" 00:21:17.791 } 00:21:17.791 ] 00:21:17.791 }, 00:21:17.791 { 00:21:17.791 "admin_qpairs": 0, 00:21:17.791 "completed_nvme_io": 0, 00:21:17.791 "current_admin_qpairs": 0, 00:21:17.791 "current_io_qpairs": 0, 00:21:17.791 "io_qpairs": 0, 00:21:17.791 "name": "nvmf_tgt_poll_group_3", 00:21:17.791 "pending_bdev_io": 0, 00:21:17.791 "transports": [ 00:21:17.791 { 00:21:17.791 "trtype": "TCP" 00:21:17.791 } 00:21:17.791 ] 00:21:17.791 } 00:21:17.791 ], 00:21:17.791 "tick_rate": 2290000000 00:21:17.791 }' 00:21:17.791 21:24:06 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:21:17.791 21:24:06 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:21:17.791 21:24:06 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:21:17.791 21:24:06 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:17.791 21:24:06 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:21:17.791 21:24:06 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:21:17.791 21:24:06 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:21:17.791 21:24:06 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:21:17.791 21:24:06 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:17.791 21:24:07 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:21:17.791 21:24:07 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:21:17.791 21:24:07 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:21:17.791 21:24:07 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:21:17.791 21:24:07 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:17.791 21:24:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.791 21:24:07 -- common/autotest_common.sh@10 -- # set +x 00:21:17.791 Malloc1 00:21:17.791 21:24:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.791 21:24:07 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:17.791 21:24:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.791 21:24:07 -- common/autotest_common.sh@10 -- # set +x 00:21:17.791 21:24:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.791 21:24:07 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:17.791 21:24:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.791 21:24:07 -- common/autotest_common.sh@10 -- # set +x 00:21:18.050 21:24:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.050 21:24:07 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:21:18.050 21:24:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.050 21:24:07 -- common/autotest_common.sh@10 -- # set +x 00:21:18.050 21:24:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.050 21:24:07 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:18.050 21:24:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.050 21:24:07 -- common/autotest_common.sh@10 -- # set +x 00:21:18.050 [2024-04-26 21:24:07.069562] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.050 21:24:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.050 21:24:07 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca -a 10.0.0.2 -s 4420 00:21:18.050 21:24:07 -- common/autotest_common.sh@638 -- # local es=0 00:21:18.050 21:24:07 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca -a 10.0.0.2 -s 4420 00:21:18.050 21:24:07 -- common/autotest_common.sh@626 -- # local arg=nvme 00:21:18.050 21:24:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:18.050 21:24:07 -- common/autotest_common.sh@630 -- # type -t nvme 00:21:18.050 21:24:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" 
in 00:21:18.050 21:24:07 -- common/autotest_common.sh@632 -- # type -P nvme 00:21:18.050 21:24:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:18.050 21:24:07 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:21:18.050 21:24:07 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:21:18.050 21:24:07 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca -a 10.0.0.2 -s 4420 00:21:18.050 [2024-04-26 21:24:07.095815] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca' 00:21:18.050 Failed to write to /dev/nvme-fabrics: Input/output error 00:21:18.051 could not add new controller: failed to write to nvme-fabrics device 00:21:18.051 21:24:07 -- common/autotest_common.sh@641 -- # es=1 00:21:18.051 21:24:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:18.051 21:24:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:18.051 21:24:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:18.051 21:24:07 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:21:18.051 21:24:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.051 21:24:07 -- common/autotest_common.sh@10 -- # set +x 00:21:18.051 21:24:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.051 21:24:07 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:18.051 21:24:07 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:21:18.051 21:24:07 -- common/autotest_common.sh@1184 -- # local i=0 00:21:18.051 21:24:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:18.051 21:24:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:18.051 21:24:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:20.582 21:24:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:20.582 21:24:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:20.582 21:24:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:20.582 21:24:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:20.582 21:24:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:20.582 21:24:09 -- common/autotest_common.sh@1194 -- # return 0 00:21:20.582 21:24:09 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:20.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:20.582 21:24:09 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:20.582 21:24:09 -- common/autotest_common.sh@1205 -- # local i=0 00:21:20.582 21:24:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:20.582 21:24:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:20.582 21:24:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:20.582 21:24:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:20.582 21:24:09 -- 
common/autotest_common.sh@1217 -- # return 0 00:21:20.582 21:24:09 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:21:20.582 21:24:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.582 21:24:09 -- common/autotest_common.sh@10 -- # set +x 00:21:20.582 21:24:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:20.582 21:24:09 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:20.582 21:24:09 -- common/autotest_common.sh@638 -- # local es=0 00:21:20.582 21:24:09 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:20.582 21:24:09 -- common/autotest_common.sh@626 -- # local arg=nvme 00:21:20.582 21:24:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.582 21:24:09 -- common/autotest_common.sh@630 -- # type -t nvme 00:21:20.582 21:24:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.582 21:24:09 -- common/autotest_common.sh@632 -- # type -P nvme 00:21:20.582 21:24:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.582 21:24:09 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:21:20.582 21:24:09 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:21:20.582 21:24:09 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:20.582 [2024-04-26 21:24:09.373162] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca' 00:21:20.582 Failed to write to /dev/nvme-fabrics: Input/output error 00:21:20.582 could not add new controller: failed to write to nvme-fabrics device 00:21:20.582 21:24:09 -- common/autotest_common.sh@641 -- # es=1 00:21:20.582 21:24:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:20.582 21:24:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:20.582 21:24:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:20.582 21:24:09 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:21:20.582 21:24:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.582 21:24:09 -- common/autotest_common.sh@10 -- # set +x 00:21:20.582 21:24:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:20.582 21:24:09 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:20.582 21:24:09 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:21:20.582 21:24:09 -- common/autotest_common.sh@1184 -- # local i=0 00:21:20.582 21:24:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:20.582 21:24:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:20.582 21:24:09 -- common/autotest_common.sh@1191 -- # sleep 
2 00:21:22.485 21:24:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:22.485 21:24:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:22.485 21:24:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:22.485 21:24:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:22.485 21:24:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:22.485 21:24:11 -- common/autotest_common.sh@1194 -- # return 0 00:21:22.485 21:24:11 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:22.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:22.485 21:24:11 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:22.485 21:24:11 -- common/autotest_common.sh@1205 -- # local i=0 00:21:22.485 21:24:11 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:22.485 21:24:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:22.485 21:24:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:22.485 21:24:11 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:22.485 21:24:11 -- common/autotest_common.sh@1217 -- # return 0 00:21:22.485 21:24:11 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.485 21:24:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.485 21:24:11 -- common/autotest_common.sh@10 -- # set +x 00:21:22.485 21:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.486 21:24:11 -- target/rpc.sh@81 -- # seq 1 5 00:21:22.486 21:24:11 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:22.486 21:24:11 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:22.486 21:24:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.486 21:24:11 -- common/autotest_common.sh@10 -- # set +x 00:21:22.486 21:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.486 21:24:11 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.486 21:24:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.486 21:24:11 -- common/autotest_common.sh@10 -- # set +x 00:21:22.486 [2024-04-26 21:24:11.641624] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.486 21:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.486 21:24:11 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:22.486 21:24:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.486 21:24:11 -- common/autotest_common.sh@10 -- # set +x 00:21:22.486 21:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.486 21:24:11 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:22.486 21:24:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.486 21:24:11 -- common/autotest_common.sh@10 -- # set +x 00:21:22.486 21:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.486 21:24:11 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:22.745 21:24:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:22.745 21:24:11 -- 
common/autotest_common.sh@1184 -- # local i=0 00:21:22.745 21:24:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:22.745 21:24:11 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:22.745 21:24:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:24.649 21:24:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:24.649 21:24:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:24.649 21:24:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:24.649 21:24:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:24.649 21:24:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:24.649 21:24:13 -- common/autotest_common.sh@1194 -- # return 0 00:21:24.649 21:24:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:24.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:24.649 21:24:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:24.649 21:24:13 -- common/autotest_common.sh@1205 -- # local i=0 00:21:24.649 21:24:13 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:24.649 21:24:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:24.908 21:24:13 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:24.908 21:24:13 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:24.908 21:24:13 -- common/autotest_common.sh@1217 -- # return 0 00:21:24.908 21:24:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:24.908 21:24:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.908 21:24:13 -- common/autotest_common.sh@10 -- # set +x 00:21:24.908 21:24:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.908 21:24:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.908 21:24:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.908 21:24:13 -- common/autotest_common.sh@10 -- # set +x 00:21:24.908 21:24:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.908 21:24:13 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:24.908 21:24:13 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:24.908 21:24:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.908 21:24:13 -- common/autotest_common.sh@10 -- # set +x 00:21:24.908 21:24:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.908 21:24:13 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.908 21:24:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.908 21:24:13 -- common/autotest_common.sh@10 -- # set +x 00:21:24.908 [2024-04-26 21:24:13.956839] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.908 21:24:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.908 21:24:13 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:24.908 21:24:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.908 21:24:13 -- common/autotest_common.sh@10 -- # set +x 00:21:24.908 21:24:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.908 21:24:13 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:24.908 21:24:13 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.908 21:24:13 -- common/autotest_common.sh@10 -- # set +x 00:21:24.908 21:24:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.908 21:24:13 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:24.908 21:24:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:24.908 21:24:14 -- common/autotest_common.sh@1184 -- # local i=0 00:21:24.908 21:24:14 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:24.908 21:24:14 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:24.908 21:24:14 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:27.439 21:24:16 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:27.439 21:24:16 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:27.439 21:24:16 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:27.439 21:24:16 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:27.439 21:24:16 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:27.440 21:24:16 -- common/autotest_common.sh@1194 -- # return 0 00:21:27.440 21:24:16 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:27.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:27.440 21:24:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:27.440 21:24:16 -- common/autotest_common.sh@1205 -- # local i=0 00:21:27.440 21:24:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:27.440 21:24:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:27.440 21:24:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:27.440 21:24:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:27.440 21:24:16 -- common/autotest_common.sh@1217 -- # return 0 00:21:27.440 21:24:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:27.440 21:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.440 21:24:16 -- common/autotest_common.sh@10 -- # set +x 00:21:27.440 21:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.440 21:24:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:27.440 21:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.440 21:24:16 -- common/autotest_common.sh@10 -- # set +x 00:21:27.440 21:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.440 21:24:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:27.440 21:24:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:27.440 21:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.440 21:24:16 -- common/autotest_common.sh@10 -- # set +x 00:21:27.440 21:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.440 21:24:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.440 21:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.440 21:24:16 -- common/autotest_common.sh@10 -- # set +x 00:21:27.440 [2024-04-26 21:24:16.265033] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:21:27.440 21:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.440 21:24:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:27.440 21:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.440 21:24:16 -- common/autotest_common.sh@10 -- # set +x 00:21:27.440 21:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.440 21:24:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:27.440 21:24:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.440 21:24:16 -- common/autotest_common.sh@10 -- # set +x 00:21:27.440 21:24:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.440 21:24:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:27.440 21:24:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:27.440 21:24:16 -- common/autotest_common.sh@1184 -- # local i=0 00:21:27.440 21:24:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:27.440 21:24:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:27.440 21:24:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:29.392 21:24:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:29.392 21:24:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:29.392 21:24:18 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:29.392 21:24:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:29.392 21:24:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:29.392 21:24:18 -- common/autotest_common.sh@1194 -- # return 0 00:21:29.392 21:24:18 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:29.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:29.392 21:24:18 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:29.392 21:24:18 -- common/autotest_common.sh@1205 -- # local i=0 00:21:29.392 21:24:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:29.392 21:24:18 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:29.392 21:24:18 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:29.392 21:24:18 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:29.651 21:24:18 -- common/autotest_common.sh@1217 -- # return 0 00:21:29.651 21:24:18 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:29.651 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.651 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:21:29.651 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.651 21:24:18 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:29.651 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.651 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:21:29.651 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.651 21:24:18 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:29.651 21:24:18 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:29.651 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 
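
The cycle that target/rpc.sh repeats five times in the trace here is: create a subsystem with a fixed serial, expose it on a TCP listener, attach the Malloc1 bdev as a namespace, allow any host, connect from the kernel initiator with nvme-cli, wait until the namespace surfaces in lsblk with that serial, then disconnect and tear everything down again. A minimal sketch of one such cycle follows; it assumes an already running SPDK nvmf target with a Malloc1 bdev, and calls scripts/rpc.py directly where the test goes through its rpc_cmd wrapper (paths and the retry interval are illustrative).

    #!/usr/bin/env bash
    # Sketch of the create/connect/teardown cycle seen in the trace.
    # Assumes: nvmf_tgt is already running with a bdev named Malloc1 and the
    # default RPC socket; rpc.py stands in for the test's rpc_cmd wrapper.
    NQN=nqn.2016-06.io.spdk:cnode1
    SERIAL=SPDKISFASTANDAWESOME
    for i in $(seq 1 5); do
        rpc.py nvmf_create_subsystem "$NQN" -s "$SERIAL"
        rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
        rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
        rpc.py nvmf_subsystem_allow_any_host "$NQN"
        nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
        # waitforserial: poll until the namespace shows up with our serial.
        until lsblk -l -o NAME,SERIAL | grep -qw "$SERIAL"; do sleep 1; done
        nvme disconnect -n "$NQN"
        # waitforserial_disconnect is the mirror image: poll until it is gone.
        while lsblk -l -o NAME,SERIAL | grep -qw "$SERIAL"; do sleep 1; done
        rpc.py nvmf_subsystem_remove_ns "$NQN" 5
        rpc.py nvmf_delete_subsystem "$NQN"
    done
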
00:21:29.651 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:21:29.651 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.651 21:24:18 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:29.651 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.651 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:21:29.651 [2024-04-26 21:24:18.692029] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.651 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.651 21:24:18 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:29.651 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.651 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:21:29.651 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.651 21:24:18 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:29.651 21:24:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.651 21:24:18 -- common/autotest_common.sh@10 -- # set +x 00:21:29.651 21:24:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.651 21:24:18 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:29.651 21:24:18 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:29.651 21:24:18 -- common/autotest_common.sh@1184 -- # local i=0 00:21:29.651 21:24:18 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:29.651 21:24:18 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:29.651 21:24:18 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:32.187 21:24:20 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:32.187 21:24:20 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:32.187 21:24:20 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:32.187 21:24:20 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:32.187 21:24:20 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:32.187 21:24:20 -- common/autotest_common.sh@1194 -- # return 0 00:21:32.187 21:24:20 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:32.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:32.187 21:24:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:32.187 21:24:20 -- common/autotest_common.sh@1205 -- # local i=0 00:21:32.187 21:24:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:32.187 21:24:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:32.187 21:24:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:32.187 21:24:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:32.187 21:24:20 -- common/autotest_common.sh@1217 -- # return 0 00:21:32.187 21:24:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:32.187 21:24:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.187 21:24:20 -- common/autotest_common.sh@10 -- # set +x 00:21:32.187 21:24:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.187 21:24:20 -- target/rpc.sh@94 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:32.187 21:24:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.187 21:24:20 -- common/autotest_common.sh@10 -- # set +x 00:21:32.187 21:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.187 21:24:21 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:21:32.187 21:24:21 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:32.187 21:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.187 21:24:21 -- common/autotest_common.sh@10 -- # set +x 00:21:32.187 21:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.187 21:24:21 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:32.187 21:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.187 21:24:21 -- common/autotest_common.sh@10 -- # set +x 00:21:32.187 [2024-04-26 21:24:21.015907] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.187 21:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.187 21:24:21 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:21:32.187 21:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.187 21:24:21 -- common/autotest_common.sh@10 -- # set +x 00:21:32.187 21:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.187 21:24:21 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:32.187 21:24:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.187 21:24:21 -- common/autotest_common.sh@10 -- # set +x 00:21:32.187 21:24:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.187 21:24:21 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:32.187 21:24:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:21:32.187 21:24:21 -- common/autotest_common.sh@1184 -- # local i=0 00:21:32.187 21:24:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:32.187 21:24:21 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:32.187 21:24:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:34.092 21:24:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:34.093 21:24:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:34.093 21:24:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:34.093 21:24:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:34.093 21:24:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:34.093 21:24:23 -- common/autotest_common.sh@1194 -- # return 0 00:21:34.093 21:24:23 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:34.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:34.093 21:24:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:34.093 21:24:23 -- common/autotest_common.sh@1205 -- # local i=0 00:21:34.093 21:24:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:34.093 21:24:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:34.093 21:24:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o 
NAME,SERIAL 00:21:34.093 21:24:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:34.093 21:24:23 -- common/autotest_common.sh@1217 -- # return 0 00:21:34.093 21:24:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:21:34.093 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.093 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.093 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.093 21:24:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.093 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.093 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.093 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.093 21:24:23 -- target/rpc.sh@99 -- # seq 1 5 00:21:34.093 21:24:23 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:34.093 21:24:23 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:34.093 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.093 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.093 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.093 21:24:23 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:34.093 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.093 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.093 [2024-04-26 21:24:23.323158] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.093 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.093 21:24:23 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:34.093 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.093 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.093 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.093 21:24:23 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:34.093 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.093 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.093 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.093 21:24:23 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:34.093 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.093 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:34.353 21:24:23 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@101 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 [2024-04-26 21:24:23.371107] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:34.353 21:24:23 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 [2024-04-26 21:24:23.419072] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 
21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:34.353 21:24:23 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 [2024-04-26 21:24:23.471021] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.353 21:24:23 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:21:34.353 21:24:23 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:21:34.353 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.353 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.353 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.354 21:24:23 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:34.354 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.354 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.354 [2024-04-26 21:24:23.542995] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.354 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.354 21:24:23 -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:34.354 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.354 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.354 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.354 21:24:23 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:21:34.354 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.354 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.354 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.354 21:24:23 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:34.354 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.354 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.354 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.354 21:24:23 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.354 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.354 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.354 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.354 21:24:23 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:21:34.354 21:24:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.354 21:24:23 -- common/autotest_common.sh@10 -- # set +x 00:21:34.613 21:24:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.613 21:24:23 -- target/rpc.sh@110 -- # stats='{ 00:21:34.613 "poll_groups": [ 00:21:34.613 { 00:21:34.613 "admin_qpairs": 2, 00:21:34.613 "completed_nvme_io": 68, 00:21:34.613 "current_admin_qpairs": 0, 00:21:34.613 "current_io_qpairs": 0, 00:21:34.613 "io_qpairs": 16, 00:21:34.613 "name": "nvmf_tgt_poll_group_0", 00:21:34.613 "pending_bdev_io": 0, 00:21:34.613 "transports": [ 00:21:34.613 { 00:21:34.613 "trtype": "TCP" 00:21:34.613 } 00:21:34.613 ] 00:21:34.613 }, 00:21:34.613 { 00:21:34.613 "admin_qpairs": 3, 00:21:34.613 "completed_nvme_io": 67, 00:21:34.613 "current_admin_qpairs": 0, 00:21:34.613 "current_io_qpairs": 0, 00:21:34.613 "io_qpairs": 17, 00:21:34.613 "name": "nvmf_tgt_poll_group_1", 00:21:34.613 "pending_bdev_io": 0, 00:21:34.613 "transports": [ 00:21:34.613 { 00:21:34.613 "trtype": "TCP" 00:21:34.613 } 00:21:34.613 ] 00:21:34.613 }, 00:21:34.613 { 00:21:34.613 "admin_qpairs": 1, 00:21:34.613 "completed_nvme_io": 162, 00:21:34.613 "current_admin_qpairs": 0, 00:21:34.613 "current_io_qpairs": 0, 00:21:34.613 "io_qpairs": 19, 00:21:34.613 "name": "nvmf_tgt_poll_group_2", 00:21:34.613 "pending_bdev_io": 0, 00:21:34.613 "transports": [ 00:21:34.613 { 00:21:34.613 "trtype": "TCP" 00:21:34.613 } 00:21:34.613 ] 00:21:34.613 }, 00:21:34.613 { 00:21:34.613 "admin_qpairs": 1, 00:21:34.613 "completed_nvme_io": 123, 00:21:34.613 "current_admin_qpairs": 0, 00:21:34.613 "current_io_qpairs": 0, 00:21:34.613 "io_qpairs": 18, 00:21:34.613 "name": "nvmf_tgt_poll_group_3", 00:21:34.613 "pending_bdev_io": 0, 00:21:34.613 "transports": [ 00:21:34.613 { 00:21:34.613 "trtype": "TCP" 00:21:34.613 } 00:21:34.613 ] 00:21:34.613 } 00:21:34.613 ], 00:21:34.613 "tick_rate": 2290000000 00:21:34.613 }' 00:21:34.613 21:24:23 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:21:34.613 21:24:23 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:21:34.613 21:24:23 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:21:34.613 21:24:23 -- target/rpc.sh@20 -- # awk 
'{s+=$1}END{print s}' 00:21:34.613 21:24:23 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:21:34.613 21:24:23 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:21:34.613 21:24:23 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:21:34.613 21:24:23 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:21:34.613 21:24:23 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:21:34.613 21:24:23 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:21:34.613 21:24:23 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:21:34.613 21:24:23 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:34.613 21:24:23 -- target/rpc.sh@123 -- # nvmftestfini 00:21:34.613 21:24:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:34.613 21:24:23 -- nvmf/common.sh@117 -- # sync 00:21:34.613 21:24:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:34.613 21:24:23 -- nvmf/common.sh@120 -- # set +e 00:21:34.613 21:24:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:34.613 21:24:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:34.613 rmmod nvme_tcp 00:21:34.613 rmmod nvme_fabrics 00:21:34.613 rmmod nvme_keyring 00:21:34.613 21:24:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:34.613 21:24:23 -- nvmf/common.sh@124 -- # set -e 00:21:34.613 21:24:23 -- nvmf/common.sh@125 -- # return 0 00:21:34.613 21:24:23 -- nvmf/common.sh@478 -- # '[' -n 83579 ']' 00:21:34.613 21:24:23 -- nvmf/common.sh@479 -- # killprocess 83579 00:21:34.613 21:24:23 -- common/autotest_common.sh@936 -- # '[' -z 83579 ']' 00:21:34.613 21:24:23 -- common/autotest_common.sh@940 -- # kill -0 83579 00:21:34.613 21:24:23 -- common/autotest_common.sh@941 -- # uname 00:21:34.613 21:24:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:34.613 21:24:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83579 00:21:34.873 killing process with pid 83579 00:21:34.873 21:24:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:34.873 21:24:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:34.873 21:24:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83579' 00:21:34.873 21:24:23 -- common/autotest_common.sh@955 -- # kill 83579 00:21:34.873 21:24:23 -- common/autotest_common.sh@960 -- # wait 83579 00:21:34.873 21:24:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:34.873 21:24:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:34.873 21:24:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:34.873 21:24:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:34.873 21:24:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:34.873 21:24:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.873 21:24:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.873 21:24:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.132 21:24:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:35.132 00:21:35.132 real 0m18.882s 00:21:35.132 user 1m12.216s 00:21:35.132 sys 0m1.752s 00:21:35.132 21:24:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:35.132 21:24:24 -- common/autotest_common.sh@10 -- # set +x 00:21:35.132 ************************************ 00:21:35.132 END TEST nvmf_rpc 00:21:35.132 ************************************ 00:21:35.132 21:24:24 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:21:35.132 21:24:24 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:35.132 21:24:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:35.132 21:24:24 -- common/autotest_common.sh@10 -- # set +x 00:21:35.132 ************************************ 00:21:35.132 START TEST nvmf_invalid 00:21:35.132 ************************************ 00:21:35.132 21:24:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:21:35.391 * Looking for test storage... 00:21:35.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:35.391 21:24:24 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:35.391 21:24:24 -- nvmf/common.sh@7 -- # uname -s 00:21:35.391 21:24:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.391 21:24:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.391 21:24:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.391 21:24:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.391 21:24:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.391 21:24:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.391 21:24:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.391 21:24:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.391 21:24:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.391 21:24:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.391 21:24:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:21:35.391 21:24:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:21:35.391 21:24:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.391 21:24:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.391 21:24:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:35.391 21:24:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.391 21:24:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:35.391 21:24:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.391 21:24:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.391 21:24:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.391 21:24:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.391 21:24:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.391 21:24:24 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.391 21:24:24 -- paths/export.sh@5 -- # export PATH 00:21:35.391 21:24:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.391 21:24:24 -- nvmf/common.sh@47 -- # : 0 00:21:35.391 21:24:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:35.391 21:24:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:35.391 21:24:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.391 21:24:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.391 21:24:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.391 21:24:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:35.391 21:24:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:35.391 21:24:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:35.391 21:24:24 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:21:35.391 21:24:24 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.391 21:24:24 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:35.391 21:24:24 -- target/invalid.sh@14 -- # target=foobar 00:21:35.391 21:24:24 -- target/invalid.sh@16 -- # RANDOM=0 00:21:35.392 21:24:24 -- target/invalid.sh@34 -- # nvmftestinit 00:21:35.392 21:24:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:35.392 21:24:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.392 21:24:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:35.392 21:24:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:35.392 21:24:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:35.392 21:24:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.392 21:24:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.392 21:24:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.392 21:24:24 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:35.392 21:24:24 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:35.392 21:24:24 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:35.392 21:24:24 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:35.392 21:24:24 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:35.392 21:24:24 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:35.392 21:24:24 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.392 21:24:24 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.392 21:24:24 -- nvmf/common.sh@143 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:35.392 21:24:24 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:35.392 21:24:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:35.392 21:24:24 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:35.392 21:24:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:35.392 21:24:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.392 21:24:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:35.392 21:24:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:35.392 21:24:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:35.392 21:24:24 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:35.392 21:24:24 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:35.392 21:24:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:35.392 Cannot find device "nvmf_tgt_br" 00:21:35.392 21:24:24 -- nvmf/common.sh@155 -- # true 00:21:35.392 21:24:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:35.392 Cannot find device "nvmf_tgt_br2" 00:21:35.392 21:24:24 -- nvmf/common.sh@156 -- # true 00:21:35.392 21:24:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:35.392 21:24:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:35.392 Cannot find device "nvmf_tgt_br" 00:21:35.392 21:24:24 -- nvmf/common.sh@158 -- # true 00:21:35.392 21:24:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:35.392 Cannot find device "nvmf_tgt_br2" 00:21:35.392 21:24:24 -- nvmf/common.sh@159 -- # true 00:21:35.392 21:24:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:35.392 21:24:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:35.392 21:24:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:35.392 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:35.392 21:24:24 -- nvmf/common.sh@162 -- # true 00:21:35.392 21:24:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:35.651 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:35.651 21:24:24 -- nvmf/common.sh@163 -- # true 00:21:35.651 21:24:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:35.651 21:24:24 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:35.651 21:24:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:35.651 21:24:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:35.651 21:24:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:35.651 21:24:24 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:35.651 21:24:24 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:35.651 21:24:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:35.651 21:24:24 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:35.651 21:24:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:35.651 21:24:24 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:35.651 21:24:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:35.651 21:24:24 -- nvmf/common.sh@186 -- # ip link 
set nvmf_tgt_br2 up 00:21:35.651 21:24:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:35.651 21:24:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:35.651 21:24:24 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:35.651 21:24:24 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:35.651 21:24:24 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:35.651 21:24:24 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:35.651 21:24:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:35.651 21:24:24 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:35.651 21:24:24 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:35.651 21:24:24 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:35.651 21:24:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:35.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:21:35.651 00:21:35.651 --- 10.0.0.2 ping statistics --- 00:21:35.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.651 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:21:35.651 21:24:24 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:35.651 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:35.651 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:21:35.651 00:21:35.651 --- 10.0.0.3 ping statistics --- 00:21:35.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.652 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:21:35.652 21:24:24 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:35.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
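
The entries above come from nvmf_veth_init in test/nvmf/common.sh: the target will run inside the nvmf_tgt_ns_spdk network namespace, wired to the initiator through veth pairs hanging off the nvmf_br bridge, and the pings confirm that 10.0.0.2 and 10.0.0.3 (target side) and 10.0.0.1 (initiator side) are reachable before any NVMe/TCP traffic is attempted. A stripped-down sketch of the same idea, reduced to a single veth pair with no bridge (interface names and addresses are illustrative):

    # Simplified version of the topology nvmf_veth_init builds: one namespace
    # for the target, one veth pair, addresses on both ends, then a ping each
    # way. The real helper adds a second target interface and an nvmf_br bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_tgt_if
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
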
00:21:35.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:21:35.652 00:21:35.652 --- 10.0.0.1 ping statistics --- 00:21:35.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.652 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:35.652 21:24:24 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.652 21:24:24 -- nvmf/common.sh@422 -- # return 0 00:21:35.652 21:24:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:35.652 21:24:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.652 21:24:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:35.652 21:24:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:35.652 21:24:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.652 21:24:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:35.652 21:24:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:35.912 21:24:24 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:21:35.912 21:24:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:35.912 21:24:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:35.912 21:24:24 -- common/autotest_common.sh@10 -- # set +x 00:21:35.912 21:24:24 -- nvmf/common.sh@470 -- # nvmfpid=84099 00:21:35.912 21:24:24 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:35.912 21:24:24 -- nvmf/common.sh@471 -- # waitforlisten 84099 00:21:35.912 21:24:24 -- common/autotest_common.sh@817 -- # '[' -z 84099 ']' 00:21:35.912 21:24:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.912 21:24:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:35.912 21:24:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.912 21:24:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:35.912 21:24:24 -- common/autotest_common.sh@10 -- # set +x 00:21:35.912 [2024-04-26 21:24:24.977094] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:35.912 [2024-04-26 21:24:24.977171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.912 [2024-04-26 21:24:25.118734] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:36.171 [2024-04-26 21:24:25.173082] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.171 [2024-04-26 21:24:25.173228] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.171 [2024-04-26 21:24:25.173272] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.171 [2024-04-26 21:24:25.173304] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.171 [2024-04-26 21:24:25.173322] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
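
nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten then blocks until the new process (pid 84099 here) answers on /var/tmp/spdk.sock, which is why the DPDK EAL initialization and, just below, the reactor start-up notices appear before the test proceeds. A rough way to reproduce that wait is to poll a cheap RPC until it completes; the rpc_get_methods probe and the timeout below are assumptions, not the helper's actual implementation:

    # Sketch of a waitforlisten-style poll: treat the target as ready once an
    # RPC round-trip on its UNIX socket succeeds. Probe, timeout and socket
    # path are illustrative; the real helper in autotest_common.sh differs.
    pid=$!                       # nvmf_tgt launched in the background just before
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
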
00:21:36.171 [2024-04-26 21:24:25.173599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.171 [2024-04-26 21:24:25.173689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.171 [2024-04-26 21:24:25.173809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.171 [2024-04-26 21:24:25.173809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:36.171 21:24:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:36.171 21:24:25 -- common/autotest_common.sh@850 -- # return 0 00:21:36.171 21:24:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:36.171 21:24:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:36.171 21:24:25 -- common/autotest_common.sh@10 -- # set +x 00:21:36.171 21:24:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.171 21:24:25 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:36.171 21:24:25 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11173 00:21:36.430 [2024-04-26 21:24:25.564409] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:21:36.430 21:24:25 -- target/invalid.sh@40 -- # out='2024/04/26 21:24:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11173 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:21:36.430 request: 00:21:36.430 { 00:21:36.430 "method": "nvmf_create_subsystem", 00:21:36.430 "params": { 00:21:36.430 "nqn": "nqn.2016-06.io.spdk:cnode11173", 00:21:36.430 "tgt_name": "foobar" 00:21:36.430 } 00:21:36.430 } 00:21:36.430 Got JSON-RPC error response 00:21:36.430 GoRPCClient: error on JSON-RPC call' 00:21:36.430 21:24:25 -- target/invalid.sh@41 -- # [[ 2024/04/26 21:24:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11173 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:21:36.430 request: 00:21:36.430 { 00:21:36.430 "method": "nvmf_create_subsystem", 00:21:36.430 "params": { 00:21:36.430 "nqn": "nqn.2016-06.io.spdk:cnode11173", 00:21:36.430 "tgt_name": "foobar" 00:21:36.430 } 00:21:36.430 } 00:21:36.430 Got JSON-RPC error response 00:21:36.430 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:21:36.430 21:24:25 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:21:36.430 21:24:25 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24494 00:21:36.689 [2024-04-26 21:24:25.812189] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24494: invalid serial number 'SPDKISFASTANDAWESOME' 00:21:36.689 21:24:25 -- target/invalid.sh@45 -- # out='2024/04/26 21:24:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode24494 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:21:36.689 request: 00:21:36.689 { 00:21:36.689 "method": "nvmf_create_subsystem", 00:21:36.689 "params": { 00:21:36.689 "nqn": "nqn.2016-06.io.spdk:cnode24494", 00:21:36.689 
"serial_number": "SPDKISFASTANDAWESOME\u001f" 00:21:36.689 } 00:21:36.689 } 00:21:36.689 Got JSON-RPC error response 00:21:36.689 GoRPCClient: error on JSON-RPC call' 00:21:36.689 21:24:25 -- target/invalid.sh@46 -- # [[ 2024/04/26 21:24:25 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode24494 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:21:36.689 request: 00:21:36.689 { 00:21:36.689 "method": "nvmf_create_subsystem", 00:21:36.689 "params": { 00:21:36.689 "nqn": "nqn.2016-06.io.spdk:cnode24494", 00:21:36.689 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:21:36.689 } 00:21:36.689 } 00:21:36.689 Got JSON-RPC error response 00:21:36.689 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:21:36.689 21:24:25 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:21:36.689 21:24:25 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32640 00:21:36.948 [2024-04-26 21:24:26.055974] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32640: invalid model number 'SPDK_Controller' 00:21:36.948 21:24:26 -- target/invalid.sh@50 -- # out='2024/04/26 21:24:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode32640], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:21:36.948 request: 00:21:36.948 { 00:21:36.948 "method": "nvmf_create_subsystem", 00:21:36.948 "params": { 00:21:36.948 "nqn": "nqn.2016-06.io.spdk:cnode32640", 00:21:36.948 "model_number": "SPDK_Controller\u001f" 00:21:36.948 } 00:21:36.948 } 00:21:36.948 Got JSON-RPC error response 00:21:36.948 GoRPCClient: error on JSON-RPC call' 00:21:36.948 21:24:26 -- target/invalid.sh@51 -- # [[ 2024/04/26 21:24:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode32640], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:21:36.948 request: 00:21:36.948 { 00:21:36.948 "method": "nvmf_create_subsystem", 00:21:36.948 "params": { 00:21:36.948 "nqn": "nqn.2016-06.io.spdk:cnode32640", 00:21:36.948 "model_number": "SPDK_Controller\u001f" 00:21:36.948 } 00:21:36.948 } 00:21:36.948 Got JSON-RPC error response 00:21:36.948 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:21:36.948 21:24:26 -- target/invalid.sh@54 -- # gen_random_s 21 00:21:36.948 21:24:26 -- target/invalid.sh@19 -- # local length=21 ll 00:21:36.948 21:24:26 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:21:36.948 21:24:26 -- target/invalid.sh@21 -- # local chars 00:21:36.948 21:24:26 -- target/invalid.sh@22 -- # local string 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 127 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=$'\177' 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 74 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=J 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 67 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x43' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=C 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 40 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x28' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+='(' 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 47 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=/ 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 97 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x61' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=a 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 115 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x73' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=s 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 118 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x76' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=v 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 87 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x57' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=W 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 54 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x36' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=6 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 37 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x25' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=% 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 72 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x48' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=H 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 94 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+='^' 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 68 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x44' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=D 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 104 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x68' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=h 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 99 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x63' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=c 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 95 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=_ 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 88 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x58' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=X 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 91 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+='[' 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 80 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x50' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=P 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # printf %x 109 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:21:36.948 21:24:26 -- target/invalid.sh@25 -- # string+=m 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:36.948 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:36.948 21:24:26 -- target/invalid.sh@28 -- # [[  == \- ]] 00:21:36.948 21:24:26 -- target/invalid.sh@31 -- # echo 'JC(/asvW6%H^Dhc_X[Pm' 00:21:36.948 21:24:26 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'JC(/asvW6%H^Dhc_X[Pm' nqn.2016-06.io.spdk:cnode9505 
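The gen_random_s trace above assembles a 21-character serial one character at a time from ASCII codes 32-127 and hands it to rpc.py nvmf_create_subsystem -s; the target's rejection ("Invalid SN") follows in the next records. A condensed equivalent of those two steps might look like the sketch below (not the exact invalid.sh code: the RANDOM-based character selection and the error-string check are assumptions, and a target must already be answering on /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Build a <length>-character string from ASCII codes 32-127, one printf per
    # character, mirroring what the traced gen_random_s loop does.
    gen_random_s() {
        local length=$1 ll ch string=
        for (( ll = 0; ll < length; ll++ )); do
            printf -v ch "\\$(printf '%03o' $(( RANDOM % 96 + 32 )))"
            string+=$ch
        done
        printf '%s\n' "$string"
    }

    # A 21-character serial overruns the 20-byte SN field (and may contain
    # non-printable bytes), so the RPC must fail with Code=-32602 / "Invalid SN".
    sn=$(gen_random_s 21)
    if out=$("$rpc" nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode9505 2>&1); then
        echo "unexpected: subsystem accepted serial '$sn'" >&2
    else
        [[ $out == *"Invalid SN"* ]] && echo "rejected as expected"
    fi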
00:21:37.208 [2024-04-26 21:24:26.415628] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9505: invalid serial number 'JC(/asvW6%H^Dhc_X[Pm' 00:21:37.208 21:24:26 -- target/invalid.sh@54 -- # out='2024/04/26 21:24:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9505 serial_number:JC(/asvW6%H^Dhc_X[Pm], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN JC(/asvW6%H^Dhc_X[Pm 00:21:37.208 request: 00:21:37.208 { 00:21:37.208 "method": "nvmf_create_subsystem", 00:21:37.208 "params": { 00:21:37.208 "nqn": "nqn.2016-06.io.spdk:cnode9505", 00:21:37.208 "serial_number": "\u007fJC(/asvW6%H^Dhc_X[Pm" 00:21:37.208 } 00:21:37.208 } 00:21:37.208 Got JSON-RPC error response 00:21:37.208 GoRPCClient: error on JSON-RPC call' 00:21:37.208 21:24:26 -- target/invalid.sh@55 -- # [[ 2024/04/26 21:24:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9505 serial_number:JC(/asvW6%H^Dhc_X[Pm], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN JC(/asvW6%H^Dhc_X[Pm 00:21:37.208 request: 00:21:37.208 { 00:21:37.208 "method": "nvmf_create_subsystem", 00:21:37.208 "params": { 00:21:37.208 "nqn": "nqn.2016-06.io.spdk:cnode9505", 00:21:37.208 "serial_number": "\u007fJC(/asvW6%H^Dhc_X[Pm" 00:21:37.208 } 00:21:37.208 } 00:21:37.208 Got JSON-RPC error response 00:21:37.208 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:21:37.208 21:24:26 -- target/invalid.sh@58 -- # gen_random_s 41 00:21:37.208 21:24:26 -- target/invalid.sh@19 -- # local length=41 ll 00:21:37.208 21:24:26 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:21:37.208 21:24:26 -- target/invalid.sh@21 -- # local chars 00:21:37.208 21:24:26 -- target/invalid.sh@22 -- # local string 00:21:37.208 21:24:26 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:21:37.208 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.208 21:24:26 -- target/invalid.sh@25 -- # printf %x 105 00:21:37.208 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x69' 00:21:37.208 21:24:26 -- target/invalid.sh@25 -- # string+=i 00:21:37.208 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.208 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.208 21:24:26 -- target/invalid.sh@25 -- # printf %x 84 00:21:37.208 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x54' 00:21:37.208 21:24:26 -- target/invalid.sh@25 -- # string+=T 00:21:37.208 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 35 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x23' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+='#' 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 35 00:21:37.468 21:24:26 
-- target/invalid.sh@25 -- # echo -e '\x23' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+='#' 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 55 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x37' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=7 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 62 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+='>' 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 119 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x77' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=w 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 87 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x57' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=W 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 38 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x26' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+='&' 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 40 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x28' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+='(' 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 114 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x72' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=r 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 57 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x39' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=9 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 73 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x49' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=I 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 67 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x43' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=C 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 45 00:21:37.468 21:24:26 -- 
target/invalid.sh@25 -- # echo -e '\x2d' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=- 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 81 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x51' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=Q 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 110 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=n 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 60 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+='<' 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 106 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=j 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 33 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x21' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+='!' 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 76 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=L 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 93 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=']' 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 98 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x62' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=b 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 118 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x76' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=v 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 83 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x53' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=S 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 79 00:21:37.468 21:24:26 -- 
target/invalid.sh@25 -- # echo -e '\x4f' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=O 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 60 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+='<' 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 35 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x23' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+='#' 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 97 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x61' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=a 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 103 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x67' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=g 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 57 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x39' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=9 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 90 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=Z 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 46 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+=. 
00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # printf %x 36 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x24' 00:21:37.468 21:24:26 -- target/invalid.sh@25 -- # string+='$' 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.468 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # printf %x 127 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # string+=$'\177' 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # printf %x 114 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x72' 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # string+=r 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # printf %x 74 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # string+=J 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # printf %x 122 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # string+=z 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # printf %x 90 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # string+=Z 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # printf %x 32 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x20' 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # string+=' ' 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # printf %x 40 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # echo -e '\x28' 00:21:37.469 21:24:26 -- target/invalid.sh@25 -- # string+='(' 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll++ )) 00:21:37.469 21:24:26 -- target/invalid.sh@24 -- # (( ll < length )) 00:21:37.469 21:24:26 -- target/invalid.sh@28 -- # [[ i == \- ]] 00:21:37.469 21:24:26 -- target/invalid.sh@31 -- # echo 'iT##7>wW&(r9IC-QnwW&(r9IC-QnwW&(r9IC-QnwW&(r9IC-QnwW&(r9IC-QnwW&(r9IC-Qn /dev/null' 00:21:40.317 21:24:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.317 21:24:29 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:40.317 ************************************ 00:21:40.317 END TEST nvmf_invalid 00:21:40.317 ************************************ 00:21:40.317 00:21:40.317 real 0m5.139s 00:21:40.317 user 0m19.847s 00:21:40.317 sys 0m1.328s 00:21:40.317 21:24:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:40.317 21:24:29 -- common/autotest_common.sh@10 -- # set +x 00:21:40.317 21:24:29 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:21:40.317 21:24:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:40.317 21:24:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:40.317 21:24:29 -- common/autotest_common.sh@10 -- # set +x 00:21:40.577 ************************************ 00:21:40.577 START TEST nvmf_abort 00:21:40.577 ************************************ 00:21:40.577 21:24:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:21:40.577 * Looking for test storage... 00:21:40.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:40.577 21:24:29 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:40.577 21:24:29 -- nvmf/common.sh@7 -- # uname -s 00:21:40.577 21:24:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.577 21:24:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.577 21:24:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.577 21:24:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.577 21:24:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.577 21:24:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.577 21:24:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.577 21:24:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.577 21:24:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.577 21:24:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.577 21:24:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:21:40.577 21:24:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:21:40.577 21:24:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.577 21:24:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.577 21:24:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:40.577 21:24:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.577 21:24:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:40.577 21:24:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.577 21:24:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.577 21:24:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.577 21:24:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.577 21:24:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.577 21:24:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.577 21:24:29 -- paths/export.sh@5 -- # export PATH 00:21:40.577 21:24:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.577 21:24:29 -- nvmf/common.sh@47 -- # : 0 00:21:40.577 21:24:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:40.577 21:24:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:40.577 21:24:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.577 21:24:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.577 21:24:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.577 21:24:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:40.577 21:24:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:40.577 21:24:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:40.577 21:24:29 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:40.577 21:24:29 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:21:40.577 21:24:29 -- target/abort.sh@14 -- # nvmftestinit 00:21:40.577 21:24:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:40.577 21:24:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.577 21:24:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:40.577 21:24:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:40.577 21:24:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:40.577 21:24:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.577 21:24:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.577 21:24:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.577 21:24:29 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:40.577 21:24:29 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:40.577 21:24:29 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:40.577 21:24:29 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:40.577 21:24:29 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:40.577 21:24:29 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:21:40.577 21:24:29 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.577 21:24:29 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.577 21:24:29 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:40.577 21:24:29 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:40.577 21:24:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:40.577 21:24:29 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:40.577 21:24:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:40.577 21:24:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.577 21:24:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:40.577 21:24:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:40.577 21:24:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:40.577 21:24:29 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:40.577 21:24:29 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:40.577 21:24:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:40.577 Cannot find device "nvmf_tgt_br" 00:21:40.577 21:24:29 -- nvmf/common.sh@155 -- # true 00:21:40.577 21:24:29 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:40.577 Cannot find device "nvmf_tgt_br2" 00:21:40.577 21:24:29 -- nvmf/common.sh@156 -- # true 00:21:40.577 21:24:29 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:40.577 21:24:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:40.577 Cannot find device "nvmf_tgt_br" 00:21:40.577 21:24:29 -- nvmf/common.sh@158 -- # true 00:21:40.577 21:24:29 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:40.577 Cannot find device "nvmf_tgt_br2" 00:21:40.577 21:24:29 -- nvmf/common.sh@159 -- # true 00:21:40.577 21:24:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:40.577 21:24:29 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:40.836 21:24:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:40.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:40.836 21:24:29 -- nvmf/common.sh@162 -- # true 00:21:40.836 21:24:29 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:40.836 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:40.836 21:24:29 -- nvmf/common.sh@163 -- # true 00:21:40.836 21:24:29 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:40.836 21:24:29 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:40.836 21:24:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:40.836 21:24:29 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:40.836 21:24:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:40.836 21:24:29 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:40.836 21:24:29 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:40.836 21:24:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:40.836 21:24:29 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:40.836 21:24:29 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:21:40.836 21:24:29 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:40.836 21:24:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:40.836 21:24:29 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:40.836 21:24:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:40.836 21:24:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:40.836 21:24:29 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:40.836 21:24:29 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:40.836 21:24:29 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:40.836 21:24:29 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:40.836 21:24:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:40.836 21:24:29 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:40.836 21:24:29 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:40.836 21:24:29 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:40.836 21:24:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:40.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:21:40.836 00:21:40.836 --- 10.0.0.2 ping statistics --- 00:21:40.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.836 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:40.836 21:24:29 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:40.836 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:40.836 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:21:40.836 00:21:40.836 --- 10.0.0.3 ping statistics --- 00:21:40.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.836 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:40.836 21:24:29 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:40.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:21:40.836 00:21:40.836 --- 10.0.0.1 ping statistics --- 00:21:40.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.836 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:40.836 21:24:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.836 21:24:30 -- nvmf/common.sh@422 -- # return 0 00:21:40.836 21:24:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:40.836 21:24:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.836 21:24:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:40.836 21:24:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:40.836 21:24:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.836 21:24:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:40.836 21:24:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:40.836 21:24:30 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:21:40.836 21:24:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:40.836 21:24:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:40.836 21:24:30 -- common/autotest_common.sh@10 -- # set +x 00:21:40.836 21:24:30 -- nvmf/common.sh@470 -- # nvmfpid=84590 00:21:40.836 21:24:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:40.836 21:24:30 -- nvmf/common.sh@471 -- # waitforlisten 84590 00:21:40.836 21:24:30 -- common/autotest_common.sh@817 -- # '[' -z 84590 ']' 00:21:40.836 21:24:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.836 21:24:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:40.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.836 21:24:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.836 21:24:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:40.836 21:24:30 -- common/autotest_common.sh@10 -- # set +x 00:21:41.094 [2024-04-26 21:24:30.093195] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:41.094 [2024-04-26 21:24:30.093278] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.094 [2024-04-26 21:24:30.236702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:41.094 [2024-04-26 21:24:30.293009] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.094 [2024-04-26 21:24:30.293148] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.094 [2024-04-26 21:24:30.293197] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.094 [2024-04-26 21:24:30.293241] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.094 [2024-04-26 21:24:30.293260] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
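At this point nvmf_veth_init has finished building the test topology: one network namespace (nvmf_tgt_ns_spdk) holding the two target-side veth ends at 10.0.0.2 and 10.0.0.3, the initiator end at 10.0.0.1 in the root namespace, everything joined through the nvmf_br bridge, an iptables rule admitting TCP/4420, and the three pings above proving reachability before nvmf_tgt is started inside the namespace. Condensed from the traced commands, the topology amounts to roughly this (a sketch; the pre-cleanup of stale devices and all error handling are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side, port 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target side, port 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # root ns -> target ports
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target ns -> initiator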
00:21:41.094 [2024-04-26 21:24:30.293455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.094 [2024-04-26 21:24:30.293570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.094 [2024-04-26 21:24:30.293572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.051 21:24:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:42.051 21:24:31 -- common/autotest_common.sh@850 -- # return 0 00:21:42.051 21:24:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:42.051 21:24:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:42.051 21:24:31 -- common/autotest_common.sh@10 -- # set +x 00:21:42.051 21:24:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.051 21:24:31 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:21:42.051 21:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.051 21:24:31 -- common/autotest_common.sh@10 -- # set +x 00:21:42.051 [2024-04-26 21:24:31.112411] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.051 21:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.051 21:24:31 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:21:42.051 21:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.051 21:24:31 -- common/autotest_common.sh@10 -- # set +x 00:21:42.051 Malloc0 00:21:42.051 21:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.051 21:24:31 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:42.051 21:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.051 21:24:31 -- common/autotest_common.sh@10 -- # set +x 00:21:42.051 Delay0 00:21:42.051 21:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.051 21:24:31 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:42.051 21:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.051 21:24:31 -- common/autotest_common.sh@10 -- # set +x 00:21:42.051 21:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.051 21:24:31 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:21:42.051 21:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.051 21:24:31 -- common/autotest_common.sh@10 -- # set +x 00:21:42.051 21:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.051 21:24:31 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:42.051 21:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.051 21:24:31 -- common/autotest_common.sh@10 -- # set +x 00:21:42.051 [2024-04-26 21:24:31.182099] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.051 21:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.051 21:24:31 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:42.051 21:24:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.051 21:24:31 -- common/autotest_common.sh@10 -- # set +x 00:21:42.051 21:24:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.051 21:24:31 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:21:42.309 [2024-04-26 21:24:31.357503] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:21:44.204 Initializing NVMe Controllers 00:21:44.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:21:44.204 controller IO queue size 128 less than required 00:21:44.204 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:21:44.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:21:44.204 Initialization complete. Launching workers. 00:21:44.204 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 39012 00:21:44.204 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 39073, failed to submit 62 00:21:44.204 success 39016, unsuccess 57, failed 0 00:21:44.204 21:24:33 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:44.204 21:24:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.204 21:24:33 -- common/autotest_common.sh@10 -- # set +x 00:21:44.204 21:24:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:44.204 21:24:33 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:44.204 21:24:33 -- target/abort.sh@38 -- # nvmftestfini 00:21:44.204 21:24:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:44.204 21:24:33 -- nvmf/common.sh@117 -- # sync 00:21:44.204 21:24:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.204 21:24:33 -- nvmf/common.sh@120 -- # set +e 00:21:44.204 21:24:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.204 21:24:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.204 rmmod nvme_tcp 00:21:44.204 rmmod nvme_fabrics 00:21:44.204 rmmod nvme_keyring 00:21:44.204 21:24:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.462 21:24:33 -- nvmf/common.sh@124 -- # set -e 00:21:44.462 21:24:33 -- nvmf/common.sh@125 -- # return 0 00:21:44.462 21:24:33 -- nvmf/common.sh@478 -- # '[' -n 84590 ']' 00:21:44.462 21:24:33 -- nvmf/common.sh@479 -- # killprocess 84590 00:21:44.462 21:24:33 -- common/autotest_common.sh@936 -- # '[' -z 84590 ']' 00:21:44.462 21:24:33 -- common/autotest_common.sh@940 -- # kill -0 84590 00:21:44.462 21:24:33 -- common/autotest_common.sh@941 -- # uname 00:21:44.462 21:24:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:44.462 21:24:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84590 00:21:44.462 killing process with pid 84590 00:21:44.462 21:24:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:44.462 21:24:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:44.462 21:24:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84590' 00:21:44.462 21:24:33 -- common/autotest_common.sh@955 -- # kill 84590 00:21:44.462 21:24:33 -- common/autotest_common.sh@960 -- # wait 84590 00:21:44.462 21:24:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:44.462 21:24:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:44.462 21:24:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:44.462 21:24:33 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.462 21:24:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.462 21:24:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.462 
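The abort run that just finished is driven entirely over JSON-RPC: a 64 MB malloc bdev (4 KiB blocks) is wrapped in a delay bdev so that I/O stays outstanding long enough to be aborted, exposed as namespace 1 of cnode0 on 10.0.0.2:4420, and the abort example then pushes queue depth 128 at a controller reporting an I/O queue of only 128 entries, which is what produces the ~39k aborts counted above. Replayed by hand, the provisioning and the run reduce to roughly this (a sketch of the traced rpc_cmd calls; it assumes the namespace topology above and a target already answering on /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0                        # 64 MB, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000                  # keep I/O in flight
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128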
21:24:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.462 21:24:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.721 21:24:33 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:44.721 00:21:44.721 real 0m4.154s 00:21:44.721 user 0m12.149s 00:21:44.721 sys 0m0.931s 00:21:44.721 21:24:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:44.721 ************************************ 00:21:44.721 END TEST nvmf_abort 00:21:44.721 ************************************ 00:21:44.721 21:24:33 -- common/autotest_common.sh@10 -- # set +x 00:21:44.721 21:24:33 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:21:44.721 21:24:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:44.721 21:24:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:44.721 21:24:33 -- common/autotest_common.sh@10 -- # set +x 00:21:44.721 ************************************ 00:21:44.721 START TEST nvmf_ns_hotplug_stress 00:21:44.721 ************************************ 00:21:44.721 21:24:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:21:44.980 * Looking for test storage... 00:21:44.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:44.980 21:24:33 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:44.980 21:24:33 -- nvmf/common.sh@7 -- # uname -s 00:21:44.980 21:24:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.980 21:24:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.980 21:24:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.980 21:24:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.980 21:24:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.980 21:24:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.980 21:24:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.980 21:24:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.980 21:24:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.980 21:24:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.980 21:24:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:21:44.980 21:24:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:21:44.981 21:24:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.981 21:24:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.981 21:24:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:44.981 21:24:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.981 21:24:34 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:44.981 21:24:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.981 21:24:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.981 21:24:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.981 21:24:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.981 21:24:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.981 21:24:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.981 21:24:34 -- paths/export.sh@5 -- # export PATH 00:21:44.981 21:24:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.981 21:24:34 -- nvmf/common.sh@47 -- # : 0 00:21:44.981 21:24:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:44.981 21:24:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:44.981 21:24:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.981 21:24:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.981 21:24:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.981 21:24:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:44.981 21:24:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:44.981 21:24:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:44.981 21:24:34 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:44.981 21:24:34 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:21:44.981 21:24:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:44.981 21:24:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.981 21:24:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:44.981 21:24:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:44.981 21:24:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:44.981 21:24:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:44.981 21:24:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.981 21:24:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.981 21:24:34 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:44.981 21:24:34 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:44.981 21:24:34 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:44.981 21:24:34 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:44.981 21:24:34 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:44.981 21:24:34 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:44.981 21:24:34 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.981 21:24:34 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.981 21:24:34 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:44.981 21:24:34 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:44.981 21:24:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:44.981 21:24:34 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:44.981 21:24:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:44.981 21:24:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.981 21:24:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:44.981 21:24:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:44.981 21:24:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:44.981 21:24:34 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:44.981 21:24:34 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:44.981 21:24:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:44.981 Cannot find device "nvmf_tgt_br" 00:21:44.981 21:24:34 -- nvmf/common.sh@155 -- # true 00:21:44.981 21:24:34 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:44.981 Cannot find device "nvmf_tgt_br2" 00:21:44.981 21:24:34 -- nvmf/common.sh@156 -- # true 00:21:44.981 21:24:34 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:44.981 21:24:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:44.981 Cannot find device "nvmf_tgt_br" 00:21:44.981 21:24:34 -- nvmf/common.sh@158 -- # true 00:21:44.981 21:24:34 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:44.981 Cannot find device "nvmf_tgt_br2" 00:21:44.981 21:24:34 -- nvmf/common.sh@159 -- # true 00:21:44.981 21:24:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:44.981 21:24:34 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:44.981 21:24:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:44.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.981 21:24:34 -- nvmf/common.sh@162 -- # true 00:21:44.981 21:24:34 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:44.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.981 21:24:34 -- nvmf/common.sh@163 -- # true 00:21:44.981 21:24:34 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:44.981 21:24:34 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:45.242 21:24:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:45.242 21:24:34 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:45.242 21:24:34 -- 
nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:45.242 21:24:34 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:45.242 21:24:34 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:45.242 21:24:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:45.242 21:24:34 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:45.242 21:24:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:45.242 21:24:34 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:45.242 21:24:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:45.242 21:24:34 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:45.242 21:24:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:45.242 21:24:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:45.242 21:24:34 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:45.242 21:24:34 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:45.242 21:24:34 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:45.242 21:24:34 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:45.242 21:24:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:45.242 21:24:34 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:45.242 21:24:34 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:45.242 21:24:34 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:45.242 21:24:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:45.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:21:45.242 00:21:45.242 --- 10.0.0.2 ping statistics --- 00:21:45.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.242 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:45.242 21:24:34 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:45.242 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:45.242 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.138 ms 00:21:45.242 00:21:45.242 --- 10.0.0.3 ping statistics --- 00:21:45.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.242 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:21:45.242 21:24:34 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:45.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:21:45.242 00:21:45.242 --- 10.0.0.1 ping statistics --- 00:21:45.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.242 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:45.242 21:24:34 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.242 21:24:34 -- nvmf/common.sh@422 -- # return 0 00:21:45.242 21:24:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:45.242 21:24:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.242 21:24:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:45.242 21:24:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:45.242 21:24:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.242 21:24:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:45.242 21:24:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:45.242 21:24:34 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:21:45.242 21:24:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:45.242 21:24:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:45.242 21:24:34 -- common/autotest_common.sh@10 -- # set +x 00:21:45.242 21:24:34 -- nvmf/common.sh@470 -- # nvmfpid=84872 00:21:45.242 21:24:34 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:45.242 21:24:34 -- nvmf/common.sh@471 -- # waitforlisten 84872 00:21:45.242 21:24:34 -- common/autotest_common.sh@817 -- # '[' -z 84872 ']' 00:21:45.242 21:24:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.242 21:24:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:45.242 21:24:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.242 21:24:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:45.242 21:24:34 -- common/autotest_common.sh@10 -- # set +x 00:21:45.502 [2024-04-26 21:24:34.500648] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:45.502 [2024-04-26 21:24:34.500728] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.502 [2024-04-26 21:24:34.640253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:45.502 [2024-04-26 21:24:34.694173] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.502 [2024-04-26 21:24:34.694337] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.502 [2024-04-26 21:24:34.694388] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.502 [2024-04-26 21:24:34.694463] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.502 [2024-04-26 21:24:34.694492] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
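nvmfappstart has now launched a fresh nvmf_tgt (pid 84872) inside the namespace, pinned to cores 1-3 (-m 0xE) with all trace groups enabled (-e 0xFFFF), and waitforlisten blocks until the target answers on /var/tmp/spdk.sock before any configuration is sent. A minimal stand-in for that start-and-wait pair could look like this (sketch only; the real waitforlisten helper in autotest_common.sh is more careful, and the rpc_get_methods poll here is my substitute for its checks):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        "$rpc" -t 1 rpc_get_methods &>/dev/null && break    # target is up once RPC answers
        sleep 0.1
    done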
00:21:45.502 [2024-04-26 21:24:34.694593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.502 [2024-04-26 21:24:34.694711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.502 [2024-04-26 21:24:34.694712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.438 21:24:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:46.438 21:24:35 -- common/autotest_common.sh@850 -- # return 0 00:21:46.438 21:24:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:46.438 21:24:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:46.438 21:24:35 -- common/autotest_common.sh@10 -- # set +x 00:21:46.438 21:24:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.438 21:24:35 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:21:46.438 21:24:35 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:46.697 [2024-04-26 21:24:35.700192] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.697 21:24:35 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:46.955 21:24:35 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.955 [2024-04-26 21:24:36.205102] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.213 21:24:36 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:47.213 21:24:36 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:21:47.472 Malloc0 00:21:47.472 21:24:36 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:47.730 Delay0 00:21:47.730 21:24:36 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:47.994 21:24:37 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:21:48.252 NULL1 00:21:48.252 21:24:37 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:21:48.510 21:24:37 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=85003 00:21:48.510 21:24:37 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:21:48.510 21:24:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:21:48.510 21:24:37 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:49.884 Read completed with error (sct=0, sc=11) 00:21:49.884 21:24:38 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:49.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:49.884 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:21:49.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:49.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:49.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:50.142 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:50.142 21:24:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:21:50.142 21:24:39 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:21:50.399 true 00:21:50.399 21:24:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:21:50.399 21:24:39 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:50.966 21:24:40 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:51.223 21:24:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:21:51.223 21:24:40 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:21:51.481 true 00:21:51.481 21:24:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:21:51.481 21:24:40 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:51.835 21:24:40 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:52.093 21:24:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:21:52.093 21:24:41 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:21:52.350 true 00:21:52.350 21:24:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:21:52.350 21:24:41 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:53.284 21:24:42 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:53.284 21:24:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:21:53.284 21:24:42 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:21:53.542 true 00:21:53.542 21:24:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:21:53.542 21:24:42 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:53.800 21:24:42 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:54.057 21:24:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:21:54.057 21:24:43 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:21:54.314 true 00:21:54.314 21:24:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:21:54.314 21:24:43 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:55.248 21:24:44 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:55.506 21:24:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 
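The "Read completed with error (sct=0, sc=11)" lines above are expected rather than failures: spdk_nvme_perf keeps issuing random reads while namespaces are hot-removed underneath it, and sc=11 is consistent with the generic NVMe status "Invalid Namespace or Format". For reference, the initiator was started earlier with the command below; the flag comments are a best-effort reading of the options, and in particular the -Q 1000 interpretation is inferred from the suppressed-error output, not taken from the tool's help text.

# -c 0x1: run the initiator on a single core
# -r: NVMe/TCP transport ID of the listener created above (10.0.0.2:4420)
# -t 30 -q 128 -w randread -o 512: 30 seconds of 512-byte random reads at queue depth 128
# -Q 1000: error tolerance/reporting knob, so the run survives reads to hot-removed namespaces
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000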
00:21:55.506 21:24:44 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:21:55.506 true 00:21:55.506 21:24:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:21:55.506 21:24:44 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:55.765 21:24:44 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:56.024 21:24:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:21:56.024 21:24:45 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:21:56.281 true 00:21:56.281 21:24:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:21:56.282 21:24:45 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:57.216 21:24:46 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:57.475 21:24:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:21:57.475 21:24:46 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:21:57.734 true 00:21:57.734 21:24:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:21:57.734 21:24:46 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:57.992 21:24:47 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:58.251 21:24:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:21:58.251 21:24:47 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:21:58.510 true 00:21:58.510 21:24:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:21:58.510 21:24:47 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:58.769 21:24:47 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:59.030 21:24:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:21:59.030 21:24:48 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:21:59.290 true 00:21:59.290 21:24:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:21:59.290 21:24:48 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:00.223 21:24:49 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:00.481 21:24:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:22:00.481 21:24:49 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:22:00.481 true 00:22:00.738 21:24:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:00.738 21:24:49 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:00.738 21:24:49 -- target/ns_hotplug_stress.sh@37 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:01.043 21:24:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:22:01.043 21:24:50 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:22:01.302 true 00:22:01.302 21:24:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:01.302 21:24:50 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:02.234 21:24:51 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:02.490 21:24:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:22:02.490 21:24:51 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:22:02.749 true 00:22:02.749 21:24:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:02.749 21:24:51 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:03.007 21:24:52 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:03.007 21:24:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:22:03.007 21:24:52 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:22:03.265 true 00:22:03.265 21:24:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:03.265 21:24:52 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:04.212 21:24:53 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:04.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:04.470 21:24:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:22:04.470 21:24:53 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:22:04.728 true 00:22:04.728 21:24:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:04.728 21:24:53 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:04.987 21:24:53 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:04.987 21:24:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:22:04.987 21:24:54 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:22:05.245 true 00:22:05.245 21:24:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:05.245 21:24:54 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:06.179 21:24:55 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:06.499 21:24:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:22:06.499 21:24:55 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:22:06.757 true 00:22:06.757 21:24:55 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:06.757 21:24:55 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:06.757 21:24:55 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:07.015 21:24:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:22:07.015 21:24:56 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:22:07.273 true 00:22:07.273 21:24:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:07.273 21:24:56 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:08.208 21:24:57 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:08.465 21:24:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:22:08.465 21:24:57 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:22:08.722 true 00:22:08.722 21:24:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:08.722 21:24:57 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:08.980 21:24:58 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:09.238 21:24:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:22:09.238 21:24:58 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:22:09.535 true 00:22:09.535 21:24:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:09.535 21:24:58 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:09.831 21:24:58 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:09.831 21:24:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:22:09.831 21:24:59 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:22:10.090 true 00:22:10.090 21:24:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:10.090 21:24:59 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:11.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:11.465 21:25:00 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:11.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:11.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:11.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:11.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:11.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:22:11.465 21:25:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:22:11.465 21:25:00 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1022 00:22:11.723 true 00:22:11.723 21:25:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:11.723 21:25:00 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:12.656 21:25:01 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:12.656 21:25:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:22:12.656 21:25:01 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:22:12.916 true 00:22:12.916 21:25:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:12.916 21:25:02 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:13.175 21:25:02 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:13.433 21:25:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:22:13.433 21:25:02 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:22:13.771 true 00:22:13.771 21:25:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:13.771 21:25:02 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:14.703 21:25:03 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:14.703 21:25:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:22:14.703 21:25:03 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:22:14.963 true 00:22:14.963 21:25:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:14.963 21:25:04 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:15.220 21:25:04 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:15.478 21:25:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:22:15.478 21:25:04 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:22:15.735 true 00:22:15.735 21:25:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:15.735 21:25:04 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:15.992 21:25:05 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:16.554 21:25:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:22:16.554 21:25:05 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:22:16.554 true 00:22:16.810 21:25:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:16.810 21:25:05 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:17.747 21:25:06 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
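The repeating pattern in the trace above (kill -0, nvmf_subsystem_remove_ns, nvmf_subsystem_add_ns Delay0, null_size increment, bdev_null_resize) is the hot-plug stress loop itself. A condensed reconstruction of that loop is sketched below, assembled from the RPC calls visible in the log; it is not the verbatim ns_hotplug_stress.sh source, and PERF_PID refers to the spdk_nvme_perf process started earlier.

# condensed reconstruction of the hot-plug loop driving the log above (not the verbatim script)
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                              # run for as long as spdk_nvme_perf is alive
    $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1         # hot-remove namespace 1 under I/O
    $RPC nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0    # hot-add it back via the Delay0 bdev
    null_size=$((null_size + 1))
    $RPC bdev_null_resize NULL1 "$null_size"                           # grow the NULL1 namespace on every pass
done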
00:22:17.747 21:25:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:22:17.747 21:25:06 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:22:18.011 true 00:22:18.011 21:25:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:18.011 21:25:07 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:18.269 21:25:07 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:18.528 21:25:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:22:18.528 21:25:07 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:22:18.528 true 00:22:18.788 21:25:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:18.788 21:25:07 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:19.726 Initializing NVMe Controllers 00:22:19.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:19.726 Controller IO queue size 128, less than required. 00:22:19.726 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:19.726 Controller IO queue size 128, less than required. 00:22:19.726 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:19.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:19.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:19.726 Initialization complete. Launching workers. 
00:22:19.726 ======================================================== 00:22:19.726 Latency(us) 00:22:19.726 Device Information : IOPS MiB/s Average min max 00:22:19.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 507.43 0.25 138013.06 3957.74 1158747.73 00:22:19.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10470.64 5.11 12224.59 3348.49 623872.50 00:22:19.726 ======================================================== 00:22:19.726 Total : 10978.07 5.36 18038.79 3348.49 1158747.73 00:22:19.726 00:22:19.726 21:25:08 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:19.726 21:25:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:22:19.726 21:25:08 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:22:19.986 true 00:22:19.986 21:25:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 85003 00:22:19.986 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (85003) - No such process 00:22:19.986 21:25:09 -- target/ns_hotplug_stress.sh@44 -- # wait 85003 00:22:19.986 21:25:09 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:22:19.986 21:25:09 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:22:19.986 21:25:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:19.986 21:25:09 -- nvmf/common.sh@117 -- # sync 00:22:19.986 21:25:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:19.986 21:25:09 -- nvmf/common.sh@120 -- # set +e 00:22:19.986 21:25:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:19.986 21:25:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:19.986 rmmod nvme_tcp 00:22:19.986 rmmod nvme_fabrics 00:22:19.986 rmmod nvme_keyring 00:22:19.986 21:25:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:19.986 21:25:09 -- nvmf/common.sh@124 -- # set -e 00:22:19.986 21:25:09 -- nvmf/common.sh@125 -- # return 0 00:22:19.986 21:25:09 -- nvmf/common.sh@478 -- # '[' -n 84872 ']' 00:22:19.986 21:25:09 -- nvmf/common.sh@479 -- # killprocess 84872 00:22:19.986 21:25:09 -- common/autotest_common.sh@936 -- # '[' -z 84872 ']' 00:22:19.986 21:25:09 -- common/autotest_common.sh@940 -- # kill -0 84872 00:22:19.986 21:25:09 -- common/autotest_common.sh@941 -- # uname 00:22:19.986 21:25:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:19.986 21:25:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84872 00:22:20.245 21:25:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:20.245 21:25:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:20.245 killing process with pid 84872 00:22:20.245 21:25:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84872' 00:22:20.245 21:25:09 -- common/autotest_common.sh@955 -- # kill 84872 00:22:20.245 21:25:09 -- common/autotest_common.sh@960 -- # wait 84872 00:22:20.245 21:25:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:20.245 21:25:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:20.245 21:25:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:20.245 21:25:09 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.245 21:25:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.245 21:25:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.245 21:25:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:22:20.245 21:25:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.503 21:25:09 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:20.503 00:22:20.503 real 0m35.652s 00:22:20.503 user 2m32.587s 00:22:20.503 sys 0m6.971s 00:22:20.503 21:25:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:20.503 21:25:09 -- common/autotest_common.sh@10 -- # set +x 00:22:20.503 ************************************ 00:22:20.503 END TEST nvmf_ns_hotplug_stress 00:22:20.503 ************************************ 00:22:20.503 21:25:09 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:22:20.503 21:25:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:20.503 21:25:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:20.503 21:25:09 -- common/autotest_common.sh@10 -- # set +x 00:22:20.503 ************************************ 00:22:20.503 START TEST nvmf_connect_stress 00:22:20.503 ************************************ 00:22:20.503 21:25:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:22:20.763 * Looking for test storage... 00:22:20.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:20.763 21:25:09 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:20.763 21:25:09 -- nvmf/common.sh@7 -- # uname -s 00:22:20.763 21:25:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.763 21:25:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.763 21:25:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.763 21:25:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.763 21:25:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.763 21:25:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.763 21:25:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.763 21:25:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.763 21:25:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.763 21:25:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.763 21:25:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:22:20.763 21:25:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:22:20.763 21:25:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.763 21:25:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.763 21:25:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:20.763 21:25:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.763 21:25:09 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:20.763 21:25:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.763 21:25:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.763 21:25:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.763 21:25:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.763 21:25:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.763 21:25:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.763 21:25:09 -- paths/export.sh@5 -- # export PATH 00:22:20.764 21:25:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.764 21:25:09 -- nvmf/common.sh@47 -- # : 0 00:22:20.764 21:25:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:20.764 21:25:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:20.764 21:25:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.764 21:25:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.764 21:25:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.764 21:25:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:20.764 21:25:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:20.764 21:25:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:20.764 21:25:09 -- target/connect_stress.sh@12 -- # nvmftestinit 00:22:20.764 21:25:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:20.764 21:25:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.764 21:25:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:20.764 21:25:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:20.764 21:25:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:20.764 21:25:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.764 21:25:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.764 21:25:09 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.764 21:25:09 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:20.764 21:25:09 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:20.764 21:25:09 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:20.764 21:25:09 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:20.764 21:25:09 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:20.764 21:25:09 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:20.764 21:25:09 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:20.764 21:25:09 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:20.764 21:25:09 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:20.764 21:25:09 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:20.764 21:25:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:20.764 21:25:09 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:20.764 21:25:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:20.764 21:25:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:20.764 21:25:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:20.764 21:25:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:20.764 21:25:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:20.764 21:25:09 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:20.764 21:25:09 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:20.764 21:25:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:20.764 Cannot find device "nvmf_tgt_br" 00:22:20.764 21:25:09 -- nvmf/common.sh@155 -- # true 00:22:20.764 21:25:09 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:20.764 Cannot find device "nvmf_tgt_br2" 00:22:20.764 21:25:09 -- nvmf/common.sh@156 -- # true 00:22:20.764 21:25:09 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:20.764 21:25:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:20.764 Cannot find device "nvmf_tgt_br" 00:22:20.764 21:25:09 -- nvmf/common.sh@158 -- # true 00:22:20.764 21:25:09 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:20.764 Cannot find device "nvmf_tgt_br2" 00:22:20.764 21:25:09 -- nvmf/common.sh@159 -- # true 00:22:20.764 21:25:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:20.764 21:25:09 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:20.764 21:25:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:20.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:20.764 21:25:09 -- nvmf/common.sh@162 -- # true 00:22:20.764 21:25:09 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:20.764 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:20.764 21:25:09 -- nvmf/common.sh@163 -- # true 00:22:20.764 21:25:09 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:20.764 21:25:09 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:20.764 21:25:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:20.764 21:25:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:20.764 21:25:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:21.023 21:25:10 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:21.023 21:25:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:21.023 21:25:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:21.023 21:25:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:21.024 21:25:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:21.024 21:25:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:21.024 21:25:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:21.024 21:25:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:21.024 21:25:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:21.024 21:25:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:21.024 21:25:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:21.024 21:25:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:21.024 21:25:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:21.024 21:25:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:21.024 21:25:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:21.024 21:25:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:21.024 21:25:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:21.024 21:25:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:21.024 21:25:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:21.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:22:21.024 00:22:21.024 --- 10.0.0.2 ping statistics --- 00:22:21.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.024 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:22:21.024 21:25:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:21.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:21.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.136 ms 00:22:21.024 00:22:21.024 --- 10.0.0.3 ping statistics --- 00:22:21.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.024 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:22:21.024 21:25:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:21.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:21.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:22:21.024 00:22:21.024 --- 10.0.0.1 ping statistics --- 00:22:21.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.024 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:22:21.024 21:25:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.024 21:25:10 -- nvmf/common.sh@422 -- # return 0 00:22:21.024 21:25:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:21.024 21:25:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.024 21:25:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:21.024 21:25:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:21.024 21:25:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.024 21:25:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:21.024 21:25:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:21.024 21:25:10 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:22:21.024 21:25:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:21.024 21:25:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:21.024 21:25:10 -- common/autotest_common.sh@10 -- # set +x 00:22:21.024 21:25:10 -- nvmf/common.sh@470 -- # nvmfpid=86162 00:22:21.024 21:25:10 -- nvmf/common.sh@471 -- # waitforlisten 86162 00:22:21.024 21:25:10 -- common/autotest_common.sh@817 -- # '[' -z 86162 ']' 00:22:21.024 21:25:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.024 21:25:10 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:21.024 21:25:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:21.024 21:25:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.024 21:25:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:21.024 21:25:10 -- common/autotest_common.sh@10 -- # set +x 00:22:21.024 [2024-04-26 21:25:10.246109] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:21.024 [2024-04-26 21:25:10.246206] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.283 [2024-04-26 21:25:10.387665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:21.283 [2024-04-26 21:25:10.440419] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.283 [2024-04-26 21:25:10.440465] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.283 [2024-04-26 21:25:10.440472] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.283 [2024-04-26 21:25:10.440478] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.283 [2024-04-26 21:25:10.440483] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
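For reference, the interface, address, and bridge layout that nvmf_veth_init rebuilds above (one veth pair for the initiator, two for the target namespace, all host-side peers on the nvmf_br bridge) can be reproduced standalone roughly as follows. This is a condensed sketch assembled from the commands visible in the trace, not the function's verbatim source.

# condensed reconstruction of the veth/bridge test topology built above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator end, stays in the host netns (10.0.0.1)
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # first target end (10.0.0.2)
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target end (10.0.0.3)
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do             # bridge the host-side peers together
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                          # initiator-side reachability checks
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                 # target-side reachability check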
00:22:21.283 [2024-04-26 21:25:10.440638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.283 [2024-04-26 21:25:10.440719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.283 [2024-04-26 21:25:10.441103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.222 21:25:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:22.222 21:25:11 -- common/autotest_common.sh@850 -- # return 0 00:22:22.222 21:25:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:22.222 21:25:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:22.222 21:25:11 -- common/autotest_common.sh@10 -- # set +x 00:22:22.222 21:25:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:22.222 21:25:11 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:22.222 21:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.222 21:25:11 -- common/autotest_common.sh@10 -- # set +x 00:22:22.222 [2024-04-26 21:25:11.233137] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.222 21:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.222 21:25:11 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:22.222 21:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.222 21:25:11 -- common/autotest_common.sh@10 -- # set +x 00:22:22.222 21:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.222 21:25:11 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.222 21:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.222 21:25:11 -- common/autotest_common.sh@10 -- # set +x 00:22:22.222 [2024-04-26 21:25:11.258778] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.222 21:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.222 21:25:11 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:22:22.222 21:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.222 21:25:11 -- common/autotest_common.sh@10 -- # set +x 00:22:22.222 NULL1 00:22:22.222 21:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.222 21:25:11 -- target/connect_stress.sh@21 -- # PERF_PID=86215 00:22:22.222 21:25:11 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:22:22.222 21:25:11 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:22:22.222 21:25:11 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # seq 1 20 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- 
target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:22:22.222 21:25:11 -- target/connect_stress.sh@28 -- # cat 00:22:22.222 21:25:11 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:22.222 21:25:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:22.222 21:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.222 21:25:11 -- common/autotest_common.sh@10 -- # set +x 00:22:22.481 21:25:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.481 21:25:11 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:22.481 21:25:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:22.481 21:25:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.481 21:25:11 -- common/autotest_common.sh@10 -- # set +x 00:22:23.048 21:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.048 21:25:12 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:23.048 21:25:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:23.048 21:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.048 21:25:12 -- common/autotest_common.sh@10 -- # set +x 00:22:23.307 21:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.307 21:25:12 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:23.307 21:25:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:23.307 21:25:12 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:22:23.307 21:25:12 -- common/autotest_common.sh@10 -- # set +x 00:22:23.565 21:25:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.565 21:25:12 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:23.565 21:25:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:23.565 21:25:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.565 21:25:12 -- common/autotest_common.sh@10 -- # set +x 00:22:23.822 21:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.822 21:25:13 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:23.822 21:25:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:23.822 21:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.822 21:25:13 -- common/autotest_common.sh@10 -- # set +x 00:22:24.390 21:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.390 21:25:13 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:24.390 21:25:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:24.390 21:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.390 21:25:13 -- common/autotest_common.sh@10 -- # set +x 00:22:24.650 21:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.650 21:25:13 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:24.650 21:25:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:24.650 21:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.650 21:25:13 -- common/autotest_common.sh@10 -- # set +x 00:22:24.909 21:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.909 21:25:13 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:24.909 21:25:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:24.909 21:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.909 21:25:13 -- common/autotest_common.sh@10 -- # set +x 00:22:25.167 21:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.167 21:25:14 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:25.167 21:25:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:25.167 21:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.167 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:22:25.425 21:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.425 21:25:14 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:25.425 21:25:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:25.425 21:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.425 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:22:25.992 21:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.992 21:25:14 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:25.992 21:25:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:25.992 21:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.992 21:25:14 -- common/autotest_common.sh@10 -- # set +x 00:22:26.250 21:25:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.250 21:25:15 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:26.250 21:25:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:26.250 21:25:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.250 21:25:15 -- common/autotest_common.sh@10 -- # set +x 00:22:26.508 21:25:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.509 21:25:15 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:26.509 21:25:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:26.509 21:25:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.509 
21:25:15 -- common/autotest_common.sh@10 -- # set +x 00:22:26.778 21:25:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.778 21:25:15 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:26.778 21:25:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:26.778 21:25:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.778 21:25:15 -- common/autotest_common.sh@10 -- # set +x 00:22:27.037 21:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.037 21:25:16 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:27.037 21:25:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:27.037 21:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.037 21:25:16 -- common/autotest_common.sh@10 -- # set +x 00:22:27.604 21:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.604 21:25:16 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:27.604 21:25:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:27.604 21:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.604 21:25:16 -- common/autotest_common.sh@10 -- # set +x 00:22:27.864 21:25:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.864 21:25:16 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:27.864 21:25:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:27.864 21:25:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.864 21:25:16 -- common/autotest_common.sh@10 -- # set +x 00:22:28.122 21:25:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.122 21:25:17 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:28.122 21:25:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:28.122 21:25:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.122 21:25:17 -- common/autotest_common.sh@10 -- # set +x 00:22:28.381 21:25:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.381 21:25:17 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:28.381 21:25:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:28.381 21:25:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.381 21:25:17 -- common/autotest_common.sh@10 -- # set +x 00:22:28.640 21:25:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.640 21:25:17 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:28.640 21:25:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:28.640 21:25:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.640 21:25:17 -- common/autotest_common.sh@10 -- # set +x 00:22:29.209 21:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.209 21:25:18 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:29.209 21:25:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:29.209 21:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.209 21:25:18 -- common/autotest_common.sh@10 -- # set +x 00:22:29.469 21:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.469 21:25:18 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:29.469 21:25:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:29.469 21:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.469 21:25:18 -- common/autotest_common.sh@10 -- # set +x 00:22:29.728 21:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.728 21:25:18 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:29.728 21:25:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:29.728 21:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.728 21:25:18 -- 
common/autotest_common.sh@10 -- # set +x 00:22:29.987 21:25:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.987 21:25:19 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:29.987 21:25:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:29.987 21:25:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.987 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:22:30.245 21:25:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.245 21:25:19 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:30.245 21:25:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:30.245 21:25:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.245 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:22:30.813 21:25:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.813 21:25:19 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:30.813 21:25:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:30.813 21:25:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.813 21:25:19 -- common/autotest_common.sh@10 -- # set +x 00:22:31.072 21:25:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.072 21:25:20 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:31.072 21:25:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:31.072 21:25:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.072 21:25:20 -- common/autotest_common.sh@10 -- # set +x 00:22:31.331 21:25:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.331 21:25:20 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:31.331 21:25:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:31.331 21:25:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.331 21:25:20 -- common/autotest_common.sh@10 -- # set +x 00:22:31.593 21:25:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.593 21:25:20 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:31.593 21:25:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:31.593 21:25:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.593 21:25:20 -- common/autotest_common.sh@10 -- # set +x 00:22:32.163 21:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.163 21:25:21 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:32.163 21:25:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:32.163 21:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.163 21:25:21 -- common/autotest_common.sh@10 -- # set +x 00:22:32.420 21:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.420 21:25:21 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:32.420 21:25:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:22:32.420 21:25:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.420 21:25:21 -- common/autotest_common.sh@10 -- # set +x 00:22:32.420 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.679 21:25:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.679 21:25:21 -- target/connect_stress.sh@34 -- # kill -0 86215 00:22:32.679 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (86215) - No such process 00:22:32.679 21:25:21 -- target/connect_stress.sh@38 -- # wait 86215 00:22:32.679 21:25:21 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:22:32.679 21:25:21 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:22:32.679 21:25:21 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:22:32.679 21:25:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:32.679 21:25:21 -- nvmf/common.sh@117 -- # sync 00:22:32.679 21:25:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:32.679 21:25:21 -- nvmf/common.sh@120 -- # set +e 00:22:32.679 21:25:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.679 21:25:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:32.679 rmmod nvme_tcp 00:22:32.679 rmmod nvme_fabrics 00:22:32.679 rmmod nvme_keyring 00:22:32.679 21:25:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.679 21:25:21 -- nvmf/common.sh@124 -- # set -e 00:22:32.679 21:25:21 -- nvmf/common.sh@125 -- # return 0 00:22:32.679 21:25:21 -- nvmf/common.sh@478 -- # '[' -n 86162 ']' 00:22:32.679 21:25:21 -- nvmf/common.sh@479 -- # killprocess 86162 00:22:32.679 21:25:21 -- common/autotest_common.sh@936 -- # '[' -z 86162 ']' 00:22:32.679 21:25:21 -- common/autotest_common.sh@940 -- # kill -0 86162 00:22:32.679 21:25:21 -- common/autotest_common.sh@941 -- # uname 00:22:32.679 21:25:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:32.679 21:25:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86162 00:22:32.679 killing process with pid 86162 00:22:32.679 21:25:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:32.679 21:25:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:32.679 21:25:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86162' 00:22:32.679 21:25:21 -- common/autotest_common.sh@955 -- # kill 86162 00:22:32.679 21:25:21 -- common/autotest_common.sh@960 -- # wait 86162 00:22:32.939 21:25:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:32.939 21:25:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:32.939 21:25:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:32.939 21:25:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.939 21:25:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.939 21:25:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.939 21:25:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.939 21:25:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.939 21:25:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:32.939 00:22:32.939 real 0m12.492s 00:22:32.939 user 0m42.101s 00:22:32.939 sys 0m2.856s 00:22:32.939 21:25:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:32.939 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:22:32.939 ************************************ 00:22:32.939 END TEST nvmf_connect_stress 00:22:32.939 ************************************ 00:22:33.200 21:25:22 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:22:33.200 21:25:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:33.200 21:25:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:33.200 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:22:33.200 ************************************ 00:22:33.200 START TEST nvmf_fused_ordering 00:22:33.200 ************************************ 00:22:33.200 21:25:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:22:33.200 * Looking for test storage... 
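The run above is the tail of nvmf_connect_stress: lines 34 and 35 of test/nvmf/target/connect_stress.sh poll the background stress process (pid 86215 in this run) with kill -0 and issue an rpc_cmd on every pass; once kill -0 reports "No such process", line 38 reaps the process with wait, and lines 39 and 41 remove rpc.txt and clear the traps. A minimal bash sketch of that polling pattern, assuming the rpc_cmd and nvmftestfini helpers from the sourced nvmf common scripts are in scope; the launch of the stress workload is not part of this excerpt, so a placeholder process stands in for it:

  sleep 30 &                              # placeholder for the stress workload (pid 86215 in the trace)
  pid=$!
  while kill -0 "$pid" 2> /dev/null; do   # connect_stress.sh line 34 in the trace
      rpc_cmd                             # line 35: keep exercising the target while it is stressed
  done
  wait "$pid"                             # line 38: reap the process once kill -0 fails
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt   # line 39
  trap - SIGINT SIGTERM EXIT              # line 41
  nvmftestfini                            # line 43: tear the target back down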
00:22:33.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:33.200 21:25:22 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:33.200 21:25:22 -- nvmf/common.sh@7 -- # uname -s 00:22:33.200 21:25:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.200 21:25:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.200 21:25:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.200 21:25:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.200 21:25:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.200 21:25:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.200 21:25:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.200 21:25:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.200 21:25:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.200 21:25:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.200 21:25:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:22:33.200 21:25:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:22:33.200 21:25:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.200 21:25:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.200 21:25:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:33.200 21:25:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.200 21:25:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:33.200 21:25:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.200 21:25:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.200 21:25:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.200 21:25:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.200 21:25:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.200 21:25:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.200 21:25:22 -- paths/export.sh@5 -- # export PATH 00:22:33.200 21:25:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.200 21:25:22 -- nvmf/common.sh@47 -- # : 0 00:22:33.200 21:25:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:33.200 21:25:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:33.200 21:25:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.200 21:25:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.200 21:25:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.200 21:25:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:33.200 21:25:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:33.200 21:25:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:33.200 21:25:22 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:22:33.200 21:25:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:33.200 21:25:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.200 21:25:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:33.200 21:25:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:33.200 21:25:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:33.201 21:25:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.201 21:25:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.201 21:25:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.460 21:25:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:33.460 21:25:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:33.460 21:25:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:33.460 21:25:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:33.460 21:25:22 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:33.460 21:25:22 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:33.460 21:25:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.460 21:25:22 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.460 21:25:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:33.460 21:25:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:33.460 21:25:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:33.460 21:25:22 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:33.460 21:25:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:33.460 21:25:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:33.460 21:25:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:33.460 21:25:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:33.460 21:25:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:33.460 21:25:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:33.460 21:25:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:33.460 21:25:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:33.460 Cannot find device "nvmf_tgt_br" 00:22:33.460 21:25:22 -- nvmf/common.sh@155 -- # true 00:22:33.460 21:25:22 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:33.461 Cannot find device "nvmf_tgt_br2" 00:22:33.461 21:25:22 -- nvmf/common.sh@156 -- # true 00:22:33.461 21:25:22 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:33.461 21:25:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:33.461 Cannot find device "nvmf_tgt_br" 00:22:33.461 21:25:22 -- nvmf/common.sh@158 -- # true 00:22:33.461 21:25:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:33.461 Cannot find device "nvmf_tgt_br2" 00:22:33.461 21:25:22 -- nvmf/common.sh@159 -- # true 00:22:33.461 21:25:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:33.461 21:25:22 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:33.461 21:25:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:33.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.461 21:25:22 -- nvmf/common.sh@162 -- # true 00:22:33.461 21:25:22 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:33.461 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:33.461 21:25:22 -- nvmf/common.sh@163 -- # true 00:22:33.461 21:25:22 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:33.461 21:25:22 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:33.461 21:25:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:33.461 21:25:22 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:33.461 21:25:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:33.461 21:25:22 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:33.461 21:25:22 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:33.461 21:25:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:33.461 21:25:22 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:33.721 21:25:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:33.721 21:25:22 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:33.721 21:25:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:33.721 21:25:22 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:33.721 21:25:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:33.721 21:25:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:33.721 21:25:22 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:33.721 21:25:22 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:33.721 21:25:22 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:33.721 21:25:22 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:33.721 21:25:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:33.721 21:25:22 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:33.721 21:25:22 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:33.721 21:25:22 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:33.721 21:25:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:33.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:22:33.721 00:22:33.721 --- 10.0.0.2 ping statistics --- 00:22:33.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.721 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:22:33.721 21:25:22 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:33.721 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:33.721 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:22:33.721 00:22:33.721 --- 10.0.0.3 ping statistics --- 00:22:33.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.721 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:22:33.721 21:25:22 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:33.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:22:33.721 00:22:33.721 --- 10.0.0.1 ping statistics --- 00:22:33.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.721 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:33.721 21:25:22 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.721 21:25:22 -- nvmf/common.sh@422 -- # return 0 00:22:33.721 21:25:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:33.721 21:25:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.721 21:25:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:33.721 21:25:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:33.721 21:25:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.721 21:25:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:33.721 21:25:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:33.721 21:25:22 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:22:33.721 21:25:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:33.721 21:25:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:33.721 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:22:33.721 21:25:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:33.721 21:25:22 -- nvmf/common.sh@470 -- # nvmfpid=86548 00:22:33.721 21:25:22 -- nvmf/common.sh@471 -- # waitforlisten 86548 00:22:33.721 21:25:22 -- common/autotest_common.sh@817 -- # '[' -z 86548 ']' 00:22:33.721 21:25:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.721 21:25:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:33.721 21:25:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
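Before the target comes up, nvmf_veth_init (nvmf/common.sh lines 141 through 207 in the trace) builds the test topology: the target-side veth ends nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) live in the nvmf_tgt_ns_spdk namespace, the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the host-side peers are enslaved to the bridge nvmf_br, an iptables rule admits TCP port 4420 on the initiator interface, and the three pings above verify reachability before nvmf_tgt is started inside the namespace. A condensed sketch of those steps; the names, addresses and nvmf_tgt command line are taken from the log, and the individual link-up commands are collapsed into loops for brevity:

  # target-side interfaces go into their own namespace, the initiator stays in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
  # host-side peers all hang off one bridge so initiator and target share an L2 segment
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # open the NVMe/TCP port toward the initiator and check reachability both ways
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  # start the SPDK target inside the namespace (it comes up as pid 86548 in this run)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &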
00:22:33.721 21:25:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:33.721 21:25:22 -- common/autotest_common.sh@10 -- # set +x 00:22:33.721 [2024-04-26 21:25:22.868155] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:33.721 [2024-04-26 21:25:22.868214] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.981 [2024-04-26 21:25:23.010487] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.981 [2024-04-26 21:25:23.062875] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.981 [2024-04-26 21:25:23.062945] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.981 [2024-04-26 21:25:23.062953] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.981 [2024-04-26 21:25:23.062958] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.981 [2024-04-26 21:25:23.062964] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.981 [2024-04-26 21:25:23.062987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.549 21:25:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:34.549 21:25:23 -- common/autotest_common.sh@850 -- # return 0 00:22:34.549 21:25:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:34.549 21:25:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:34.549 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:22:34.549 21:25:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.549 21:25:23 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:34.549 21:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.549 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:22:34.549 [2024-04-26 21:25:23.800959] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.840 21:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.840 21:25:23 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:34.840 21:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.840 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:22:34.840 21:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.840 21:25:23 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:34.840 21:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.840 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:22:34.840 [2024-04-26 21:25:23.824999] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.840 21:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.840 21:25:23 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:22:34.840 21:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.840 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:22:34.840 NULL1 00:22:34.840 21:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.840 21:25:23 -- 
target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:22:34.840 21:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.840 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:22:34.840 21:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.840 21:25:23 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:22:34.840 21:25:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.840 21:25:23 -- common/autotest_common.sh@10 -- # set +x 00:22:34.840 21:25:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.840 21:25:23 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:34.840 [2024-04-26 21:25:23.894679] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:34.840 [2024-04-26 21:25:23.894725] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86598 ] 00:22:35.099 Attached to nqn.2016-06.io.spdk:cnode1 00:22:35.099 Namespace ID: 1 size: 1GB 00:22:35.099 fused_ordering(0) 00:22:35.099 fused_ordering(1) 00:22:35.099 fused_ordering(2) 00:22:35.099 fused_ordering(3) 00:22:35.099 fused_ordering(4) 00:22:35.099 fused_ordering(5) 00:22:35.099 fused_ordering(6) 00:22:35.099 fused_ordering(7) 00:22:35.099 fused_ordering(8) 00:22:35.099 fused_ordering(9) 00:22:35.099 fused_ordering(10) 00:22:35.099 fused_ordering(11) 00:22:35.099 fused_ordering(12) 00:22:35.099 fused_ordering(13) 00:22:35.099 fused_ordering(14) 00:22:35.099 fused_ordering(15) 00:22:35.099 fused_ordering(16) 00:22:35.099 fused_ordering(17) 00:22:35.099 fused_ordering(18) 00:22:35.099 fused_ordering(19) 00:22:35.099 fused_ordering(20) 00:22:35.099 fused_ordering(21) 00:22:35.099 fused_ordering(22) 00:22:35.099 fused_ordering(23) 00:22:35.099 fused_ordering(24) 00:22:35.099 fused_ordering(25) 00:22:35.099 fused_ordering(26) 00:22:35.099 fused_ordering(27) 00:22:35.099 fused_ordering(28) 00:22:35.099 fused_ordering(29) 00:22:35.099 fused_ordering(30) 00:22:35.099 fused_ordering(31) 00:22:35.099 fused_ordering(32) 00:22:35.099 fused_ordering(33) 00:22:35.099 fused_ordering(34) 00:22:35.099 fused_ordering(35) 00:22:35.099 fused_ordering(36) 00:22:35.099 fused_ordering(37) 00:22:35.099 fused_ordering(38) 00:22:35.099 fused_ordering(39) 00:22:35.099 fused_ordering(40) 00:22:35.099 fused_ordering(41) 00:22:35.099 fused_ordering(42) 00:22:35.099 fused_ordering(43) 00:22:35.099 fused_ordering(44) 00:22:35.099 fused_ordering(45) 00:22:35.099 fused_ordering(46) 00:22:35.099 fused_ordering(47) 00:22:35.099 fused_ordering(48) 00:22:35.099 fused_ordering(49) 00:22:35.099 fused_ordering(50) 00:22:35.099 fused_ordering(51) 00:22:35.099 fused_ordering(52) 00:22:35.099 fused_ordering(53) 00:22:35.099 fused_ordering(54) 00:22:35.099 fused_ordering(55) 00:22:35.099 fused_ordering(56) 00:22:35.099 fused_ordering(57) 00:22:35.099 fused_ordering(58) 00:22:35.099 fused_ordering(59) 00:22:35.099 fused_ordering(60) 00:22:35.099 fused_ordering(61) 00:22:35.099 fused_ordering(62) 00:22:35.099 fused_ordering(63) 00:22:35.099 fused_ordering(64) 00:22:35.099 fused_ordering(65) 00:22:35.099 fused_ordering(66) 00:22:35.099 fused_ordering(67) 00:22:35.099 fused_ordering(68) 00:22:35.099 
fused_ordering(69) 00:22:35.099 fused_ordering(70) 00:22:35.099 fused_ordering(71) 00:22:35.099 fused_ordering(72) 00:22:35.099 fused_ordering(73) 00:22:35.099 fused_ordering(74) 00:22:35.099 fused_ordering(75) 00:22:35.099 fused_ordering(76) 00:22:35.099 fused_ordering(77) 00:22:35.099 fused_ordering(78) 00:22:35.099 fused_ordering(79) 00:22:35.099 fused_ordering(80) 00:22:35.099 fused_ordering(81) 00:22:35.099 fused_ordering(82) 00:22:35.099 fused_ordering(83) 00:22:35.099 fused_ordering(84) 00:22:35.099 fused_ordering(85) 00:22:35.099 fused_ordering(86) 00:22:35.099 fused_ordering(87) 00:22:35.099 fused_ordering(88) 00:22:35.099 fused_ordering(89) 00:22:35.099 fused_ordering(90) 00:22:35.099 fused_ordering(91) 00:22:35.099 fused_ordering(92) 00:22:35.099 fused_ordering(93) 00:22:35.099 fused_ordering(94) 00:22:35.099 fused_ordering(95) 00:22:35.099 fused_ordering(96) 00:22:35.099 fused_ordering(97) 00:22:35.099 fused_ordering(98) 00:22:35.099 fused_ordering(99) 00:22:35.099 fused_ordering(100) 00:22:35.099 fused_ordering(101) 00:22:35.099 fused_ordering(102) 00:22:35.099 fused_ordering(103) 00:22:35.099 fused_ordering(104) 00:22:35.099 fused_ordering(105) 00:22:35.099 fused_ordering(106) 00:22:35.099 fused_ordering(107) 00:22:35.099 fused_ordering(108) 00:22:35.099 fused_ordering(109) 00:22:35.099 fused_ordering(110) 00:22:35.099 fused_ordering(111) 00:22:35.099 fused_ordering(112) 00:22:35.099 fused_ordering(113) 00:22:35.099 fused_ordering(114) 00:22:35.099 fused_ordering(115) 00:22:35.099 fused_ordering(116) 00:22:35.099 fused_ordering(117) 00:22:35.099 fused_ordering(118) 00:22:35.099 fused_ordering(119) 00:22:35.100 fused_ordering(120) 00:22:35.100 fused_ordering(121) 00:22:35.100 fused_ordering(122) 00:22:35.100 fused_ordering(123) 00:22:35.100 fused_ordering(124) 00:22:35.100 fused_ordering(125) 00:22:35.100 fused_ordering(126) 00:22:35.100 fused_ordering(127) 00:22:35.100 fused_ordering(128) 00:22:35.100 fused_ordering(129) 00:22:35.100 fused_ordering(130) 00:22:35.100 fused_ordering(131) 00:22:35.100 fused_ordering(132) 00:22:35.100 fused_ordering(133) 00:22:35.100 fused_ordering(134) 00:22:35.100 fused_ordering(135) 00:22:35.100 fused_ordering(136) 00:22:35.100 fused_ordering(137) 00:22:35.100 fused_ordering(138) 00:22:35.100 fused_ordering(139) 00:22:35.100 fused_ordering(140) 00:22:35.100 fused_ordering(141) 00:22:35.100 fused_ordering(142) 00:22:35.100 fused_ordering(143) 00:22:35.100 fused_ordering(144) 00:22:35.100 fused_ordering(145) 00:22:35.100 fused_ordering(146) 00:22:35.100 fused_ordering(147) 00:22:35.100 fused_ordering(148) 00:22:35.100 fused_ordering(149) 00:22:35.100 fused_ordering(150) 00:22:35.100 fused_ordering(151) 00:22:35.100 fused_ordering(152) 00:22:35.100 fused_ordering(153) 00:22:35.100 fused_ordering(154) 00:22:35.100 fused_ordering(155) 00:22:35.100 fused_ordering(156) 00:22:35.100 fused_ordering(157) 00:22:35.100 fused_ordering(158) 00:22:35.100 fused_ordering(159) 00:22:35.100 fused_ordering(160) 00:22:35.100 fused_ordering(161) 00:22:35.100 fused_ordering(162) 00:22:35.100 fused_ordering(163) 00:22:35.100 fused_ordering(164) 00:22:35.100 fused_ordering(165) 00:22:35.100 fused_ordering(166) 00:22:35.100 fused_ordering(167) 00:22:35.100 fused_ordering(168) 00:22:35.100 fused_ordering(169) 00:22:35.100 fused_ordering(170) 00:22:35.100 fused_ordering(171) 00:22:35.100 fused_ordering(172) 00:22:35.100 fused_ordering(173) 00:22:35.100 fused_ordering(174) 00:22:35.100 fused_ordering(175) 00:22:35.100 fused_ordering(176) 00:22:35.100 fused_ordering(177) 
00:22:35.100 fused_ordering(178) 00:22:35.100 fused_ordering(179) 00:22:35.100 fused_ordering(180) 00:22:35.100 fused_ordering(181) 00:22:35.100 fused_ordering(182) 00:22:35.100 fused_ordering(183) 00:22:35.100 fused_ordering(184) 00:22:35.100 fused_ordering(185) 00:22:35.100 fused_ordering(186) 00:22:35.100 fused_ordering(187) 00:22:35.100 fused_ordering(188) 00:22:35.100 fused_ordering(189) 00:22:35.100 fused_ordering(190) 00:22:35.100 fused_ordering(191) 00:22:35.100 fused_ordering(192) 00:22:35.100 fused_ordering(193) 00:22:35.100 fused_ordering(194) 00:22:35.100 fused_ordering(195) 00:22:35.100 fused_ordering(196) 00:22:35.100 fused_ordering(197) 00:22:35.100 fused_ordering(198) 00:22:35.100 fused_ordering(199) 00:22:35.100 fused_ordering(200) 00:22:35.100 fused_ordering(201) 00:22:35.100 fused_ordering(202) 00:22:35.100 fused_ordering(203) 00:22:35.100 fused_ordering(204) 00:22:35.100 fused_ordering(205) 00:22:35.360 fused_ordering(206) 00:22:35.360 fused_ordering(207) 00:22:35.360 fused_ordering(208) 00:22:35.360 fused_ordering(209) 00:22:35.360 fused_ordering(210) 00:22:35.360 fused_ordering(211) 00:22:35.360 fused_ordering(212) 00:22:35.360 fused_ordering(213) 00:22:35.360 fused_ordering(214) 00:22:35.360 fused_ordering(215) 00:22:35.360 fused_ordering(216) 00:22:35.360 fused_ordering(217) 00:22:35.360 fused_ordering(218) 00:22:35.360 fused_ordering(219) 00:22:35.360 fused_ordering(220) 00:22:35.360 fused_ordering(221) 00:22:35.360 fused_ordering(222) 00:22:35.360 fused_ordering(223) 00:22:35.360 fused_ordering(224) 00:22:35.360 fused_ordering(225) 00:22:35.360 fused_ordering(226) 00:22:35.360 fused_ordering(227) 00:22:35.360 fused_ordering(228) 00:22:35.360 fused_ordering(229) 00:22:35.360 fused_ordering(230) 00:22:35.360 fused_ordering(231) 00:22:35.360 fused_ordering(232) 00:22:35.360 fused_ordering(233) 00:22:35.360 fused_ordering(234) 00:22:35.360 fused_ordering(235) 00:22:35.360 fused_ordering(236) 00:22:35.360 fused_ordering(237) 00:22:35.360 fused_ordering(238) 00:22:35.360 fused_ordering(239) 00:22:35.360 fused_ordering(240) 00:22:35.360 fused_ordering(241) 00:22:35.360 fused_ordering(242) 00:22:35.360 fused_ordering(243) 00:22:35.360 fused_ordering(244) 00:22:35.360 fused_ordering(245) 00:22:35.360 fused_ordering(246) 00:22:35.360 fused_ordering(247) 00:22:35.360 fused_ordering(248) 00:22:35.360 fused_ordering(249) 00:22:35.360 fused_ordering(250) 00:22:35.360 fused_ordering(251) 00:22:35.360 fused_ordering(252) 00:22:35.360 fused_ordering(253) 00:22:35.360 fused_ordering(254) 00:22:35.360 fused_ordering(255) 00:22:35.360 fused_ordering(256) 00:22:35.360 fused_ordering(257) 00:22:35.360 fused_ordering(258) 00:22:35.360 fused_ordering(259) 00:22:35.360 fused_ordering(260) 00:22:35.360 fused_ordering(261) 00:22:35.360 fused_ordering(262) 00:22:35.360 fused_ordering(263) 00:22:35.360 fused_ordering(264) 00:22:35.360 fused_ordering(265) 00:22:35.360 fused_ordering(266) 00:22:35.360 fused_ordering(267) 00:22:35.360 fused_ordering(268) 00:22:35.360 fused_ordering(269) 00:22:35.360 fused_ordering(270) 00:22:35.360 fused_ordering(271) 00:22:35.360 fused_ordering(272) 00:22:35.360 fused_ordering(273) 00:22:35.360 fused_ordering(274) 00:22:35.360 fused_ordering(275) 00:22:35.360 fused_ordering(276) 00:22:35.360 fused_ordering(277) 00:22:35.360 fused_ordering(278) 00:22:35.360 fused_ordering(279) 00:22:35.360 fused_ordering(280) 00:22:35.360 fused_ordering(281) 00:22:35.360 fused_ordering(282) 00:22:35.360 fused_ordering(283) 00:22:35.360 fused_ordering(284) 00:22:35.360 
fused_ordering(285) 00:22:35.360 fused_ordering(286) 00:22:35.360 fused_ordering(287) 00:22:35.360 fused_ordering(288) 00:22:35.360 fused_ordering(289) 00:22:35.360 fused_ordering(290) 00:22:35.360 fused_ordering(291) 00:22:35.360 fused_ordering(292) 00:22:35.360 fused_ordering(293) 00:22:35.360 fused_ordering(294) 00:22:35.360 fused_ordering(295) 00:22:35.360 fused_ordering(296) 00:22:35.360 fused_ordering(297) 00:22:35.360 fused_ordering(298) 00:22:35.360 fused_ordering(299) 00:22:35.360 fused_ordering(300) 00:22:35.360 fused_ordering(301) 00:22:35.360 fused_ordering(302) 00:22:35.360 fused_ordering(303) 00:22:35.360 fused_ordering(304) 00:22:35.360 fused_ordering(305) 00:22:35.360 fused_ordering(306) 00:22:35.360 fused_ordering(307) 00:22:35.360 fused_ordering(308) 00:22:35.360 fused_ordering(309) 00:22:35.360 fused_ordering(310) 00:22:35.360 fused_ordering(311) 00:22:35.360 fused_ordering(312) 00:22:35.360 fused_ordering(313) 00:22:35.360 fused_ordering(314) 00:22:35.360 fused_ordering(315) 00:22:35.360 fused_ordering(316) 00:22:35.360 fused_ordering(317) 00:22:35.360 fused_ordering(318) 00:22:35.360 fused_ordering(319) 00:22:35.360 fused_ordering(320) 00:22:35.360 fused_ordering(321) 00:22:35.360 fused_ordering(322) 00:22:35.360 fused_ordering(323) 00:22:35.360 fused_ordering(324) 00:22:35.360 fused_ordering(325) 00:22:35.360 fused_ordering(326) 00:22:35.360 fused_ordering(327) 00:22:35.360 fused_ordering(328) 00:22:35.360 fused_ordering(329) 00:22:35.360 fused_ordering(330) 00:22:35.360 fused_ordering(331) 00:22:35.360 fused_ordering(332) 00:22:35.360 fused_ordering(333) 00:22:35.360 fused_ordering(334) 00:22:35.360 fused_ordering(335) 00:22:35.360 fused_ordering(336) 00:22:35.360 fused_ordering(337) 00:22:35.360 fused_ordering(338) 00:22:35.360 fused_ordering(339) 00:22:35.360 fused_ordering(340) 00:22:35.360 fused_ordering(341) 00:22:35.360 fused_ordering(342) 00:22:35.360 fused_ordering(343) 00:22:35.360 fused_ordering(344) 00:22:35.360 fused_ordering(345) 00:22:35.360 fused_ordering(346) 00:22:35.360 fused_ordering(347) 00:22:35.360 fused_ordering(348) 00:22:35.360 fused_ordering(349) 00:22:35.360 fused_ordering(350) 00:22:35.360 fused_ordering(351) 00:22:35.360 fused_ordering(352) 00:22:35.360 fused_ordering(353) 00:22:35.360 fused_ordering(354) 00:22:35.360 fused_ordering(355) 00:22:35.360 fused_ordering(356) 00:22:35.360 fused_ordering(357) 00:22:35.360 fused_ordering(358) 00:22:35.360 fused_ordering(359) 00:22:35.360 fused_ordering(360) 00:22:35.360 fused_ordering(361) 00:22:35.360 fused_ordering(362) 00:22:35.360 fused_ordering(363) 00:22:35.360 fused_ordering(364) 00:22:35.360 fused_ordering(365) 00:22:35.360 fused_ordering(366) 00:22:35.360 fused_ordering(367) 00:22:35.360 fused_ordering(368) 00:22:35.360 fused_ordering(369) 00:22:35.360 fused_ordering(370) 00:22:35.360 fused_ordering(371) 00:22:35.360 fused_ordering(372) 00:22:35.360 fused_ordering(373) 00:22:35.360 fused_ordering(374) 00:22:35.360 fused_ordering(375) 00:22:35.360 fused_ordering(376) 00:22:35.360 fused_ordering(377) 00:22:35.360 fused_ordering(378) 00:22:35.360 fused_ordering(379) 00:22:35.360 fused_ordering(380) 00:22:35.360 fused_ordering(381) 00:22:35.360 fused_ordering(382) 00:22:35.361 fused_ordering(383) 00:22:35.361 fused_ordering(384) 00:22:35.361 fused_ordering(385) 00:22:35.361 fused_ordering(386) 00:22:35.361 fused_ordering(387) 00:22:35.361 fused_ordering(388) 00:22:35.361 fused_ordering(389) 00:22:35.361 fused_ordering(390) 00:22:35.361 fused_ordering(391) 00:22:35.361 fused_ordering(392) 
00:22:35.361 fused_ordering(393) 00:22:35.361 fused_ordering(394) 00:22:35.361 fused_ordering(395) 00:22:35.361 fused_ordering(396) 00:22:35.361 fused_ordering(397) 00:22:35.361 fused_ordering(398) 00:22:35.361 fused_ordering(399) 00:22:35.361 fused_ordering(400) 00:22:35.361 fused_ordering(401) 00:22:35.361 fused_ordering(402) 00:22:35.361 fused_ordering(403) 00:22:35.361 fused_ordering(404) 00:22:35.361 fused_ordering(405) 00:22:35.361 fused_ordering(406) 00:22:35.361 fused_ordering(407) 00:22:35.361 fused_ordering(408) 00:22:35.361 fused_ordering(409) 00:22:35.361 fused_ordering(410) 00:22:35.621 fused_ordering(411) 00:22:35.621 fused_ordering(412) 00:22:35.621 fused_ordering(413) 00:22:35.621 fused_ordering(414) 00:22:35.621 fused_ordering(415) 00:22:35.621 fused_ordering(416) 00:22:35.621 fused_ordering(417) 00:22:35.621 fused_ordering(418) 00:22:35.621 fused_ordering(419) 00:22:35.621 fused_ordering(420) 00:22:35.621 fused_ordering(421) 00:22:35.621 fused_ordering(422) 00:22:35.621 fused_ordering(423) 00:22:35.621 fused_ordering(424) 00:22:35.621 fused_ordering(425) 00:22:35.621 fused_ordering(426) 00:22:35.621 fused_ordering(427) 00:22:35.621 fused_ordering(428) 00:22:35.621 fused_ordering(429) 00:22:35.621 fused_ordering(430) 00:22:35.621 fused_ordering(431) 00:22:35.621 fused_ordering(432) 00:22:35.621 fused_ordering(433) 00:22:35.621 fused_ordering(434) 00:22:35.621 fused_ordering(435) 00:22:35.621 fused_ordering(436) 00:22:35.621 fused_ordering(437) 00:22:35.621 fused_ordering(438) 00:22:35.621 fused_ordering(439) 00:22:35.621 fused_ordering(440) 00:22:35.621 fused_ordering(441) 00:22:35.621 fused_ordering(442) 00:22:35.621 fused_ordering(443) 00:22:35.621 fused_ordering(444) 00:22:35.621 fused_ordering(445) 00:22:35.621 fused_ordering(446) 00:22:35.621 fused_ordering(447) 00:22:35.621 fused_ordering(448) 00:22:35.621 fused_ordering(449) 00:22:35.621 fused_ordering(450) 00:22:35.621 fused_ordering(451) 00:22:35.621 fused_ordering(452) 00:22:35.621 fused_ordering(453) 00:22:35.621 fused_ordering(454) 00:22:35.621 fused_ordering(455) 00:22:35.621 fused_ordering(456) 00:22:35.621 fused_ordering(457) 00:22:35.621 fused_ordering(458) 00:22:35.621 fused_ordering(459) 00:22:35.621 fused_ordering(460) 00:22:35.621 fused_ordering(461) 00:22:35.621 fused_ordering(462) 00:22:35.621 fused_ordering(463) 00:22:35.621 fused_ordering(464) 00:22:35.621 fused_ordering(465) 00:22:35.621 fused_ordering(466) 00:22:35.621 fused_ordering(467) 00:22:35.621 fused_ordering(468) 00:22:35.621 fused_ordering(469) 00:22:35.621 fused_ordering(470) 00:22:35.621 fused_ordering(471) 00:22:35.621 fused_ordering(472) 00:22:35.621 fused_ordering(473) 00:22:35.621 fused_ordering(474) 00:22:35.621 fused_ordering(475) 00:22:35.621 fused_ordering(476) 00:22:35.621 fused_ordering(477) 00:22:35.621 fused_ordering(478) 00:22:35.621 fused_ordering(479) 00:22:35.621 fused_ordering(480) 00:22:35.621 fused_ordering(481) 00:22:35.621 fused_ordering(482) 00:22:35.621 fused_ordering(483) 00:22:35.621 fused_ordering(484) 00:22:35.621 fused_ordering(485) 00:22:35.621 fused_ordering(486) 00:22:35.621 fused_ordering(487) 00:22:35.621 fused_ordering(488) 00:22:35.621 fused_ordering(489) 00:22:35.621 fused_ordering(490) 00:22:35.621 fused_ordering(491) 00:22:35.621 fused_ordering(492) 00:22:35.621 fused_ordering(493) 00:22:35.621 fused_ordering(494) 00:22:35.621 fused_ordering(495) 00:22:35.621 fused_ordering(496) 00:22:35.621 fused_ordering(497) 00:22:35.621 fused_ordering(498) 00:22:35.621 fused_ordering(499) 00:22:35.621 
fused_ordering(500) 00:22:35.621 fused_ordering(501) 00:22:35.621 fused_ordering(502) 00:22:35.621 fused_ordering(503) 00:22:35.621 fused_ordering(504) 00:22:35.621 fused_ordering(505) 00:22:35.621 fused_ordering(506) 00:22:35.621 fused_ordering(507) 00:22:35.621 fused_ordering(508) 00:22:35.621 fused_ordering(509) 00:22:35.621 fused_ordering(510) 00:22:35.621 fused_ordering(511) 00:22:35.621 fused_ordering(512) 00:22:35.621 fused_ordering(513) 00:22:35.621 fused_ordering(514) 00:22:35.621 fused_ordering(515) 00:22:35.621 fused_ordering(516) 00:22:35.621 fused_ordering(517) 00:22:35.621 fused_ordering(518) 00:22:35.621 fused_ordering(519) 00:22:35.621 fused_ordering(520) 00:22:35.621 fused_ordering(521) 00:22:35.621 fused_ordering(522) 00:22:35.621 fused_ordering(523) 00:22:35.621 fused_ordering(524) 00:22:35.621 fused_ordering(525) 00:22:35.621 fused_ordering(526) 00:22:35.621 fused_ordering(527) 00:22:35.621 fused_ordering(528) 00:22:35.621 fused_ordering(529) 00:22:35.621 fused_ordering(530) 00:22:35.621 fused_ordering(531) 00:22:35.621 fused_ordering(532) 00:22:35.621 fused_ordering(533) 00:22:35.621 fused_ordering(534) 00:22:35.621 fused_ordering(535) 00:22:35.621 fused_ordering(536) 00:22:35.621 fused_ordering(537) 00:22:35.621 fused_ordering(538) 00:22:35.621 fused_ordering(539) 00:22:35.621 fused_ordering(540) 00:22:35.621 fused_ordering(541) 00:22:35.621 fused_ordering(542) 00:22:35.621 fused_ordering(543) 00:22:35.621 fused_ordering(544) 00:22:35.621 fused_ordering(545) 00:22:35.621 fused_ordering(546) 00:22:35.621 fused_ordering(547) 00:22:35.621 fused_ordering(548) 00:22:35.621 fused_ordering(549) 00:22:35.621 fused_ordering(550) 00:22:35.621 fused_ordering(551) 00:22:35.621 fused_ordering(552) 00:22:35.621 fused_ordering(553) 00:22:35.621 fused_ordering(554) 00:22:35.621 fused_ordering(555) 00:22:35.621 fused_ordering(556) 00:22:35.621 fused_ordering(557) 00:22:35.621 fused_ordering(558) 00:22:35.621 fused_ordering(559) 00:22:35.621 fused_ordering(560) 00:22:35.621 fused_ordering(561) 00:22:35.621 fused_ordering(562) 00:22:35.621 fused_ordering(563) 00:22:35.621 fused_ordering(564) 00:22:35.621 fused_ordering(565) 00:22:35.621 fused_ordering(566) 00:22:35.621 fused_ordering(567) 00:22:35.621 fused_ordering(568) 00:22:35.621 fused_ordering(569) 00:22:35.621 fused_ordering(570) 00:22:35.621 fused_ordering(571) 00:22:35.621 fused_ordering(572) 00:22:35.621 fused_ordering(573) 00:22:35.621 fused_ordering(574) 00:22:35.621 fused_ordering(575) 00:22:35.621 fused_ordering(576) 00:22:35.621 fused_ordering(577) 00:22:35.621 fused_ordering(578) 00:22:35.621 fused_ordering(579) 00:22:35.621 fused_ordering(580) 00:22:35.621 fused_ordering(581) 00:22:35.621 fused_ordering(582) 00:22:35.621 fused_ordering(583) 00:22:35.621 fused_ordering(584) 00:22:35.621 fused_ordering(585) 00:22:35.621 fused_ordering(586) 00:22:35.621 fused_ordering(587) 00:22:35.621 fused_ordering(588) 00:22:35.621 fused_ordering(589) 00:22:35.621 fused_ordering(590) 00:22:35.621 fused_ordering(591) 00:22:35.621 fused_ordering(592) 00:22:35.621 fused_ordering(593) 00:22:35.621 fused_ordering(594) 00:22:35.621 fused_ordering(595) 00:22:35.621 fused_ordering(596) 00:22:35.621 fused_ordering(597) 00:22:35.621 fused_ordering(598) 00:22:35.621 fused_ordering(599) 00:22:35.621 fused_ordering(600) 00:22:35.621 fused_ordering(601) 00:22:35.621 fused_ordering(602) 00:22:35.621 fused_ordering(603) 00:22:35.621 fused_ordering(604) 00:22:35.621 fused_ordering(605) 00:22:35.621 fused_ordering(606) 00:22:35.621 fused_ordering(607) 
00:22:35.621 fused_ordering(608) 00:22:35.621 fused_ordering(609) 00:22:35.621 fused_ordering(610) 00:22:35.621 fused_ordering(611) 00:22:35.621 fused_ordering(612) 00:22:35.621 fused_ordering(613) 00:22:35.621 fused_ordering(614) 00:22:35.621 fused_ordering(615) 00:22:35.881 fused_ordering(616) 00:22:35.881 fused_ordering(617) 00:22:35.881 fused_ordering(618) 00:22:35.881 fused_ordering(619) 00:22:35.881 fused_ordering(620) 00:22:35.881 fused_ordering(621) 00:22:35.881 fused_ordering(622) 00:22:35.881 fused_ordering(623) 00:22:35.881 fused_ordering(624) 00:22:35.881 fused_ordering(625) 00:22:35.881 fused_ordering(626) 00:22:35.881 fused_ordering(627) 00:22:35.881 fused_ordering(628) 00:22:35.881 fused_ordering(629) 00:22:35.881 fused_ordering(630) 00:22:35.881 fused_ordering(631) 00:22:35.881 fused_ordering(632) 00:22:35.881 fused_ordering(633) 00:22:35.881 fused_ordering(634) 00:22:35.881 fused_ordering(635) 00:22:35.881 fused_ordering(636) 00:22:35.881 fused_ordering(637) 00:22:35.881 fused_ordering(638) 00:22:35.881 fused_ordering(639) 00:22:35.881 fused_ordering(640) 00:22:35.881 fused_ordering(641) 00:22:35.881 fused_ordering(642) 00:22:35.881 fused_ordering(643) 00:22:35.881 fused_ordering(644) 00:22:35.881 fused_ordering(645) 00:22:35.881 fused_ordering(646) 00:22:35.881 fused_ordering(647) 00:22:35.881 fused_ordering(648) 00:22:35.881 fused_ordering(649) 00:22:35.881 fused_ordering(650) 00:22:35.881 fused_ordering(651) 00:22:35.881 fused_ordering(652) 00:22:35.881 fused_ordering(653) 00:22:35.881 fused_ordering(654) 00:22:35.881 fused_ordering(655) 00:22:35.881 fused_ordering(656) 00:22:35.881 fused_ordering(657) 00:22:35.881 fused_ordering(658) 00:22:35.881 fused_ordering(659) 00:22:35.881 fused_ordering(660) 00:22:35.881 fused_ordering(661) 00:22:35.881 fused_ordering(662) 00:22:35.881 fused_ordering(663) 00:22:35.881 fused_ordering(664) 00:22:35.881 fused_ordering(665) 00:22:35.881 fused_ordering(666) 00:22:35.881 fused_ordering(667) 00:22:35.881 fused_ordering(668) 00:22:35.881 fused_ordering(669) 00:22:35.881 fused_ordering(670) 00:22:35.881 fused_ordering(671) 00:22:35.881 fused_ordering(672) 00:22:35.881 fused_ordering(673) 00:22:35.881 fused_ordering(674) 00:22:35.881 fused_ordering(675) 00:22:35.881 fused_ordering(676) 00:22:35.881 fused_ordering(677) 00:22:35.881 fused_ordering(678) 00:22:35.881 fused_ordering(679) 00:22:35.881 fused_ordering(680) 00:22:35.881 fused_ordering(681) 00:22:35.881 fused_ordering(682) 00:22:35.881 fused_ordering(683) 00:22:35.881 fused_ordering(684) 00:22:35.881 fused_ordering(685) 00:22:35.881 fused_ordering(686) 00:22:35.881 fused_ordering(687) 00:22:35.881 fused_ordering(688) 00:22:35.881 fused_ordering(689) 00:22:35.881 fused_ordering(690) 00:22:35.881 fused_ordering(691) 00:22:35.881 fused_ordering(692) 00:22:35.881 fused_ordering(693) 00:22:35.881 fused_ordering(694) 00:22:35.881 fused_ordering(695) 00:22:35.881 fused_ordering(696) 00:22:35.881 fused_ordering(697) 00:22:35.881 fused_ordering(698) 00:22:35.881 fused_ordering(699) 00:22:35.881 fused_ordering(700) 00:22:35.881 fused_ordering(701) 00:22:35.881 fused_ordering(702) 00:22:35.881 fused_ordering(703) 00:22:35.881 fused_ordering(704) 00:22:35.881 fused_ordering(705) 00:22:35.881 fused_ordering(706) 00:22:35.881 fused_ordering(707) 00:22:35.881 fused_ordering(708) 00:22:35.881 fused_ordering(709) 00:22:35.881 fused_ordering(710) 00:22:35.881 fused_ordering(711) 00:22:35.881 fused_ordering(712) 00:22:35.881 fused_ordering(713) 00:22:35.881 fused_ordering(714) 00:22:35.881 
fused_ordering(715) 00:22:35.881 fused_ordering(716) 00:22:35.881 fused_ordering(717) 00:22:35.881 fused_ordering(718) 00:22:35.881 fused_ordering(719) 00:22:35.881 fused_ordering(720) 00:22:35.881 fused_ordering(721) 00:22:35.881 fused_ordering(722) 00:22:35.881 fused_ordering(723) 00:22:35.881 fused_ordering(724) 00:22:35.881 fused_ordering(725) 00:22:35.881 fused_ordering(726) 00:22:35.881 fused_ordering(727) 00:22:35.881 fused_ordering(728) 00:22:35.881 fused_ordering(729) 00:22:35.881 fused_ordering(730) 00:22:35.881 fused_ordering(731) 00:22:35.881 fused_ordering(732) 00:22:35.881 fused_ordering(733) 00:22:35.881 fused_ordering(734) 00:22:35.881 fused_ordering(735) 00:22:35.881 fused_ordering(736) 00:22:35.881 fused_ordering(737) 00:22:35.881 fused_ordering(738) 00:22:35.881 fused_ordering(739) 00:22:35.881 fused_ordering(740) 00:22:35.881 fused_ordering(741) 00:22:35.881 fused_ordering(742) 00:22:35.881 fused_ordering(743) 00:22:35.881 fused_ordering(744) 00:22:35.881 fused_ordering(745) 00:22:35.881 fused_ordering(746) 00:22:35.881 fused_ordering(747) 00:22:35.881 fused_ordering(748) 00:22:35.881 fused_ordering(749) 00:22:35.881 fused_ordering(750) 00:22:35.881 fused_ordering(751) 00:22:35.881 fused_ordering(752) 00:22:35.881 fused_ordering(753) 00:22:35.881 fused_ordering(754) 00:22:35.881 fused_ordering(755) 00:22:35.881 fused_ordering(756) 00:22:35.881 fused_ordering(757) 00:22:35.881 fused_ordering(758) 00:22:35.881 fused_ordering(759) 00:22:35.881 fused_ordering(760) 00:22:35.881 fused_ordering(761) 00:22:35.881 fused_ordering(762) 00:22:35.881 fused_ordering(763) 00:22:35.881 fused_ordering(764) 00:22:35.881 fused_ordering(765) 00:22:35.881 fused_ordering(766) 00:22:35.881 fused_ordering(767) 00:22:35.881 fused_ordering(768) 00:22:35.881 fused_ordering(769) 00:22:35.881 fused_ordering(770) 00:22:35.881 fused_ordering(771) 00:22:35.881 fused_ordering(772) 00:22:35.881 fused_ordering(773) 00:22:35.881 fused_ordering(774) 00:22:35.881 fused_ordering(775) 00:22:35.881 fused_ordering(776) 00:22:35.881 fused_ordering(777) 00:22:35.881 fused_ordering(778) 00:22:35.881 fused_ordering(779) 00:22:35.881 fused_ordering(780) 00:22:35.881 fused_ordering(781) 00:22:35.881 fused_ordering(782) 00:22:35.881 fused_ordering(783) 00:22:35.881 fused_ordering(784) 00:22:35.881 fused_ordering(785) 00:22:35.881 fused_ordering(786) 00:22:35.881 fused_ordering(787) 00:22:35.881 fused_ordering(788) 00:22:35.881 fused_ordering(789) 00:22:35.881 fused_ordering(790) 00:22:35.881 fused_ordering(791) 00:22:35.881 fused_ordering(792) 00:22:35.881 fused_ordering(793) 00:22:35.881 fused_ordering(794) 00:22:35.881 fused_ordering(795) 00:22:35.881 fused_ordering(796) 00:22:35.881 fused_ordering(797) 00:22:35.881 fused_ordering(798) 00:22:35.881 fused_ordering(799) 00:22:35.881 fused_ordering(800) 00:22:35.881 fused_ordering(801) 00:22:35.881 fused_ordering(802) 00:22:35.881 fused_ordering(803) 00:22:35.881 fused_ordering(804) 00:22:35.881 fused_ordering(805) 00:22:35.881 fused_ordering(806) 00:22:35.881 fused_ordering(807) 00:22:35.881 fused_ordering(808) 00:22:35.881 fused_ordering(809) 00:22:35.881 fused_ordering(810) 00:22:35.881 fused_ordering(811) 00:22:35.881 fused_ordering(812) 00:22:35.881 fused_ordering(813) 00:22:35.881 fused_ordering(814) 00:22:35.881 fused_ordering(815) 00:22:35.881 fused_ordering(816) 00:22:35.881 fused_ordering(817) 00:22:35.881 fused_ordering(818) 00:22:35.881 fused_ordering(819) 00:22:35.881 fused_ordering(820) 00:22:36.451 fused_ordering(821) 00:22:36.451 fused_ordering(822) 
00:22:36.451 fused_ordering(823) 00:22:36.451 fused_ordering(824) 00:22:36.451 fused_ordering(825) 00:22:36.451 fused_ordering(826) 00:22:36.451 fused_ordering(827) 00:22:36.451 fused_ordering(828) 00:22:36.451 fused_ordering(829) 00:22:36.451 fused_ordering(830) 00:22:36.451 fused_ordering(831) 00:22:36.451 fused_ordering(832) 00:22:36.451 fused_ordering(833) 00:22:36.451 fused_ordering(834) 00:22:36.451 fused_ordering(835) 00:22:36.451 fused_ordering(836) 00:22:36.451 fused_ordering(837) 00:22:36.451 fused_ordering(838) 00:22:36.451 fused_ordering(839) 00:22:36.451 fused_ordering(840) 00:22:36.451 fused_ordering(841) 00:22:36.451 fused_ordering(842) 00:22:36.451 fused_ordering(843) 00:22:36.451 fused_ordering(844) 00:22:36.451 fused_ordering(845) 00:22:36.451 fused_ordering(846) 00:22:36.451 fused_ordering(847) 00:22:36.451 fused_ordering(848) 00:22:36.451 fused_ordering(849) 00:22:36.451 fused_ordering(850) 00:22:36.451 fused_ordering(851) 00:22:36.451 fused_ordering(852) 00:22:36.451 fused_ordering(853) 00:22:36.451 fused_ordering(854) 00:22:36.451 fused_ordering(855) 00:22:36.451 fused_ordering(856) 00:22:36.451 fused_ordering(857) 00:22:36.451 fused_ordering(858) 00:22:36.451 fused_ordering(859) 00:22:36.451 fused_ordering(860) 00:22:36.451 fused_ordering(861) 00:22:36.451 fused_ordering(862) 00:22:36.451 fused_ordering(863) 00:22:36.451 fused_ordering(864) 00:22:36.451 fused_ordering(865) 00:22:36.451 fused_ordering(866) 00:22:36.451 fused_ordering(867) 00:22:36.451 fused_ordering(868) 00:22:36.451 fused_ordering(869) 00:22:36.451 fused_ordering(870) 00:22:36.451 fused_ordering(871) 00:22:36.451 fused_ordering(872) 00:22:36.451 fused_ordering(873) 00:22:36.451 fused_ordering(874) 00:22:36.451 fused_ordering(875) 00:22:36.451 fused_ordering(876) 00:22:36.451 fused_ordering(877) 00:22:36.451 fused_ordering(878) 00:22:36.451 fused_ordering(879) 00:22:36.451 fused_ordering(880) 00:22:36.451 fused_ordering(881) 00:22:36.451 fused_ordering(882) 00:22:36.451 fused_ordering(883) 00:22:36.451 fused_ordering(884) 00:22:36.451 fused_ordering(885) 00:22:36.451 fused_ordering(886) 00:22:36.451 fused_ordering(887) 00:22:36.451 fused_ordering(888) 00:22:36.451 fused_ordering(889) 00:22:36.451 fused_ordering(890) 00:22:36.451 fused_ordering(891) 00:22:36.451 fused_ordering(892) 00:22:36.451 fused_ordering(893) 00:22:36.451 fused_ordering(894) 00:22:36.451 fused_ordering(895) 00:22:36.451 fused_ordering(896) 00:22:36.451 fused_ordering(897) 00:22:36.451 fused_ordering(898) 00:22:36.451 fused_ordering(899) 00:22:36.451 fused_ordering(900) 00:22:36.451 fused_ordering(901) 00:22:36.451 fused_ordering(902) 00:22:36.451 fused_ordering(903) 00:22:36.451 fused_ordering(904) 00:22:36.451 fused_ordering(905) 00:22:36.451 fused_ordering(906) 00:22:36.451 fused_ordering(907) 00:22:36.451 fused_ordering(908) 00:22:36.451 fused_ordering(909) 00:22:36.451 fused_ordering(910) 00:22:36.451 fused_ordering(911) 00:22:36.451 fused_ordering(912) 00:22:36.451 fused_ordering(913) 00:22:36.451 fused_ordering(914) 00:22:36.451 fused_ordering(915) 00:22:36.451 fused_ordering(916) 00:22:36.451 fused_ordering(917) 00:22:36.451 fused_ordering(918) 00:22:36.451 fused_ordering(919) 00:22:36.451 fused_ordering(920) 00:22:36.451 fused_ordering(921) 00:22:36.452 fused_ordering(922) 00:22:36.452 fused_ordering(923) 00:22:36.452 fused_ordering(924) 00:22:36.452 fused_ordering(925) 00:22:36.452 fused_ordering(926) 00:22:36.452 fused_ordering(927) 00:22:36.452 fused_ordering(928) 00:22:36.452 fused_ordering(929) 00:22:36.452 
fused_ordering(930) 00:22:36.452 fused_ordering(931) 00:22:36.452 fused_ordering(932) 00:22:36.452 fused_ordering(933) 00:22:36.452 fused_ordering(934) 00:22:36.452 fused_ordering(935) 00:22:36.452 fused_ordering(936) 00:22:36.452 fused_ordering(937) 00:22:36.452 fused_ordering(938) 00:22:36.452 fused_ordering(939) 00:22:36.452 fused_ordering(940) 00:22:36.452 fused_ordering(941) 00:22:36.452 fused_ordering(942) 00:22:36.452 fused_ordering(943) 00:22:36.452 fused_ordering(944) 00:22:36.452 fused_ordering(945) 00:22:36.452 fused_ordering(946) 00:22:36.452 fused_ordering(947) 00:22:36.452 fused_ordering(948) 00:22:36.452 fused_ordering(949) 00:22:36.452 fused_ordering(950) 00:22:36.452 fused_ordering(951) 00:22:36.452 fused_ordering(952) 00:22:36.452 fused_ordering(953) 00:22:36.452 fused_ordering(954) 00:22:36.452 fused_ordering(955) 00:22:36.452 fused_ordering(956) 00:22:36.452 fused_ordering(957) 00:22:36.452 fused_ordering(958) 00:22:36.452 fused_ordering(959) 00:22:36.452 fused_ordering(960) 00:22:36.452 fused_ordering(961) 00:22:36.452 fused_ordering(962) 00:22:36.452 fused_ordering(963) 00:22:36.452 fused_ordering(964) 00:22:36.452 fused_ordering(965) 00:22:36.452 fused_ordering(966) 00:22:36.452 fused_ordering(967) 00:22:36.452 fused_ordering(968) 00:22:36.452 fused_ordering(969) 00:22:36.452 fused_ordering(970) 00:22:36.452 fused_ordering(971) 00:22:36.452 fused_ordering(972) 00:22:36.452 fused_ordering(973) 00:22:36.452 fused_ordering(974) 00:22:36.452 fused_ordering(975) 00:22:36.452 fused_ordering(976) 00:22:36.452 fused_ordering(977) 00:22:36.452 fused_ordering(978) 00:22:36.452 fused_ordering(979) 00:22:36.452 fused_ordering(980) 00:22:36.452 fused_ordering(981) 00:22:36.452 fused_ordering(982) 00:22:36.452 fused_ordering(983) 00:22:36.452 fused_ordering(984) 00:22:36.452 fused_ordering(985) 00:22:36.452 fused_ordering(986) 00:22:36.452 fused_ordering(987) 00:22:36.452 fused_ordering(988) 00:22:36.452 fused_ordering(989) 00:22:36.452 fused_ordering(990) 00:22:36.452 fused_ordering(991) 00:22:36.452 fused_ordering(992) 00:22:36.452 fused_ordering(993) 00:22:36.452 fused_ordering(994) 00:22:36.452 fused_ordering(995) 00:22:36.452 fused_ordering(996) 00:22:36.452 fused_ordering(997) 00:22:36.452 fused_ordering(998) 00:22:36.452 fused_ordering(999) 00:22:36.452 fused_ordering(1000) 00:22:36.452 fused_ordering(1001) 00:22:36.452 fused_ordering(1002) 00:22:36.452 fused_ordering(1003) 00:22:36.452 fused_ordering(1004) 00:22:36.452 fused_ordering(1005) 00:22:36.452 fused_ordering(1006) 00:22:36.452 fused_ordering(1007) 00:22:36.452 fused_ordering(1008) 00:22:36.452 fused_ordering(1009) 00:22:36.452 fused_ordering(1010) 00:22:36.452 fused_ordering(1011) 00:22:36.452 fused_ordering(1012) 00:22:36.452 fused_ordering(1013) 00:22:36.452 fused_ordering(1014) 00:22:36.452 fused_ordering(1015) 00:22:36.452 fused_ordering(1016) 00:22:36.452 fused_ordering(1017) 00:22:36.452 fused_ordering(1018) 00:22:36.452 fused_ordering(1019) 00:22:36.452 fused_ordering(1020) 00:22:36.452 fused_ordering(1021) 00:22:36.452 fused_ordering(1022) 00:22:36.452 fused_ordering(1023) 00:22:36.452 21:25:25 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:22:36.452 21:25:25 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:22:36.452 21:25:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:36.452 21:25:25 -- nvmf/common.sh@117 -- # sync 00:22:36.452 21:25:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:36.452 21:25:25 -- nvmf/common.sh@120 -- # set +e 00:22:36.452 21:25:25 -- nvmf/common.sh@121 -- 
# for i in {1..20} 00:22:36.452 21:25:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:36.452 rmmod nvme_tcp 00:22:36.452 rmmod nvme_fabrics 00:22:36.452 rmmod nvme_keyring 00:22:36.452 21:25:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:36.452 21:25:25 -- nvmf/common.sh@124 -- # set -e 00:22:36.452 21:25:25 -- nvmf/common.sh@125 -- # return 0 00:22:36.452 21:25:25 -- nvmf/common.sh@478 -- # '[' -n 86548 ']' 00:22:36.452 21:25:25 -- nvmf/common.sh@479 -- # killprocess 86548 00:22:36.452 21:25:25 -- common/autotest_common.sh@936 -- # '[' -z 86548 ']' 00:22:36.452 21:25:25 -- common/autotest_common.sh@940 -- # kill -0 86548 00:22:36.452 21:25:25 -- common/autotest_common.sh@941 -- # uname 00:22:36.452 21:25:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:36.452 21:25:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86548 00:22:36.452 21:25:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:36.452 21:25:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:36.452 killing process with pid 86548 00:22:36.452 21:25:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86548' 00:22:36.452 21:25:25 -- common/autotest_common.sh@955 -- # kill 86548 00:22:36.452 21:25:25 -- common/autotest_common.sh@960 -- # wait 86548 00:22:36.711 21:25:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:36.711 21:25:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:36.711 21:25:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:36.711 21:25:25 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:36.711 21:25:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:36.711 21:25:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.711 21:25:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.711 21:25:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.711 21:25:25 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:36.711 ************************************ 00:22:36.711 END TEST nvmf_fused_ordering 00:22:36.711 ************************************ 00:22:36.711 00:22:36.711 real 0m3.522s 00:22:36.711 user 0m4.006s 00:22:36.711 sys 0m1.130s 00:22:36.711 21:25:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:36.711 21:25:25 -- common/autotest_common.sh@10 -- # set +x 00:22:36.711 21:25:25 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:22:36.711 21:25:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:36.711 21:25:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:36.711 21:25:25 -- common/autotest_common.sh@10 -- # set +x 00:22:36.711 ************************************ 00:22:36.711 START TEST nvmf_delete_subsystem 00:22:36.711 ************************************ 00:22:36.711 21:25:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:22:36.970 * Looking for test storage... 
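The nvmftestfini teardown traced above unloads the initiator-side kernel modules and kills the target process before the next test begins. A minimal sketch of that sequence, reconstructed from the xtrace (the actual loop and error handling in nvmf/common.sh may differ):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # also drops nvme_fabrics / nvme_keyring as dependent modules
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"     # nvmf_tgt process, pid 86548 in this run
    ip -4 addr flush nvmf_init_if          # release the initiator-side veth address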
00:22:36.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:36.970 21:25:26 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:36.970 21:25:26 -- nvmf/common.sh@7 -- # uname -s 00:22:36.970 21:25:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.970 21:25:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.970 21:25:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.970 21:25:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.970 21:25:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.970 21:25:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.970 21:25:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.970 21:25:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.970 21:25:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.970 21:25:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.970 21:25:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:22:36.970 21:25:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:22:36.970 21:25:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.970 21:25:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.970 21:25:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:36.970 21:25:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.970 21:25:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:36.970 21:25:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.970 21:25:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.970 21:25:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.970 21:25:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.970 21:25:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.970 21:25:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.970 21:25:26 -- paths/export.sh@5 -- # export PATH 00:22:36.970 21:25:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.970 21:25:26 -- nvmf/common.sh@47 -- # : 0 00:22:36.970 21:25:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.970 21:25:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.970 21:25:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.970 21:25:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.970 21:25:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.970 21:25:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:36.970 21:25:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.970 21:25:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.970 21:25:26 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:22:36.970 21:25:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:36.970 21:25:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.970 21:25:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:36.970 21:25:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:36.970 21:25:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:36.970 21:25:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.970 21:25:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.971 21:25:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.971 21:25:26 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:36.971 21:25:26 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:36.971 21:25:26 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:36.971 21:25:26 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:36.971 21:25:26 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:36.971 21:25:26 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:36.971 21:25:26 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.971 21:25:26 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.971 21:25:26 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:36.971 21:25:26 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:36.971 21:25:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:36.971 21:25:26 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:36.971 21:25:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:36.971 21:25:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:36.971 21:25:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:36.971 21:25:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:36.971 21:25:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:36.971 21:25:26 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:36.971 21:25:26 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:36.971 21:25:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:36.971 Cannot find device "nvmf_tgt_br" 00:22:36.971 21:25:26 -- nvmf/common.sh@155 -- # true 00:22:36.971 21:25:26 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:36.971 Cannot find device "nvmf_tgt_br2" 00:22:36.971 21:25:26 -- nvmf/common.sh@156 -- # true 00:22:36.971 21:25:26 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:36.971 21:25:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:36.971 Cannot find device "nvmf_tgt_br" 00:22:36.971 21:25:26 -- nvmf/common.sh@158 -- # true 00:22:36.971 21:25:26 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:36.971 Cannot find device "nvmf_tgt_br2" 00:22:36.971 21:25:26 -- nvmf/common.sh@159 -- # true 00:22:36.971 21:25:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:37.231 21:25:26 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:37.231 21:25:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:37.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.231 21:25:26 -- nvmf/common.sh@162 -- # true 00:22:37.231 21:25:26 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:37.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:37.231 21:25:26 -- nvmf/common.sh@163 -- # true 00:22:37.231 21:25:26 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:37.231 21:25:26 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:37.231 21:25:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:37.231 21:25:26 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:37.231 21:25:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:37.231 21:25:26 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:37.231 21:25:26 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:37.231 21:25:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:37.231 21:25:26 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:37.231 21:25:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:37.231 21:25:26 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:37.231 21:25:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:37.231 21:25:26 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:37.231 21:25:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:37.231 21:25:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:37.231 21:25:26 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:37.231 21:25:26 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:37.231 21:25:26 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:37.231 21:25:26 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:37.231 21:25:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:37.231 21:25:26 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:37.231 21:25:26 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:37.231 21:25:26 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:37.231 21:25:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:37.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:22:37.231 00:22:37.231 --- 10.0.0.2 ping statistics --- 00:22:37.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.231 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:22:37.231 21:25:26 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:37.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:37.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:22:37.231 00:22:37.231 --- 10.0.0.3 ping statistics --- 00:22:37.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.231 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:22:37.231 21:25:26 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:37.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:22:37.231 00:22:37.231 --- 10.0.0.1 ping statistics --- 00:22:37.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.231 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:37.231 21:25:26 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.231 21:25:26 -- nvmf/common.sh@422 -- # return 0 00:22:37.231 21:25:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:37.231 21:25:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.231 21:25:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:37.231 21:25:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:37.231 21:25:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.231 21:25:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:37.231 21:25:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:37.231 21:25:26 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:22:37.231 21:25:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:37.231 21:25:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:37.231 21:25:26 -- common/autotest_common.sh@10 -- # set +x 00:22:37.231 21:25:26 -- nvmf/common.sh@470 -- # nvmfpid=86783 00:22:37.231 21:25:26 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:37.231 21:25:26 -- nvmf/common.sh@471 -- # waitforlisten 86783 00:22:37.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.231 21:25:26 -- common/autotest_common.sh@817 -- # '[' -z 86783 ']' 00:22:37.231 21:25:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.231 21:25:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:37.231 21:25:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
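Condensed from the trace above, the virtual topology that nvmf_veth_init builds for the target looks roughly like this (abridged sketch taken from the traced commands; the second target interface on 10.0.0.3 and the link "up" steps are handled the same way):

    ip netns add nvmf_tgt_ns_spdk                                # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                           # initiator -> target namespace sanity check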
00:22:37.231 21:25:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:37.231 21:25:26 -- common/autotest_common.sh@10 -- # set +x 00:22:37.231 [2024-04-26 21:25:26.459195] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:37.231 [2024-04-26 21:25:26.459265] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.492 [2024-04-26 21:25:26.603107] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:37.492 [2024-04-26 21:25:26.656772] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.492 [2024-04-26 21:25:26.656820] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.492 [2024-04-26 21:25:26.656826] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.492 [2024-04-26 21:25:26.656831] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.492 [2024-04-26 21:25:26.656836] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:37.492 [2024-04-26 21:25:26.656937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.492 [2024-04-26 21:25:26.656944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.430 21:25:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:38.430 21:25:27 -- common/autotest_common.sh@850 -- # return 0 00:22:38.430 21:25:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:38.430 21:25:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:38.430 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.430 21:25:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.430 21:25:27 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.430 21:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.430 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.430 [2024-04-26 21:25:27.418944] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.430 21:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.430 21:25:27 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:38.430 21:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.430 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.430 21:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.430 21:25:27 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.430 21:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.430 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.430 [2024-04-26 21:25:27.442998] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.430 21:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.430 21:25:27 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:22:38.430 21:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.430 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.430 
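Stripped of the rpc_cmd xtrace noise, the target-side configuration issued up to this point boils down to a few RPCs; an equivalent hand-driven sequence would look roughly like this (a sketch only, the test drives these through rpc_cmd against the default /var/tmp/spdk.sock):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8192-byte in-capsule data size
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512             # 1000 MB null bdev, 512-byte block size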
NULL1 00:22:38.430 21:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.430 21:25:27 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:38.430 21:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.430 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.430 Delay0 00:22:38.430 21:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.430 21:25:27 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:38.430 21:25:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.430 21:25:27 -- common/autotest_common.sh@10 -- # set +x 00:22:38.430 21:25:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.430 21:25:27 -- target/delete_subsystem.sh@28 -- # perf_pid=86835 00:22:38.430 21:25:27 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:22:38.430 21:25:27 -- target/delete_subsystem.sh@30 -- # sleep 2 00:22:38.430 [2024-04-26 21:25:27.659097] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:40.336 21:25:29 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:40.336 21:25:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.336 21:25:29 -- common/autotest_common.sh@10 -- # set +x 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 starting I/O failed: -6 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 starting I/O failed: -6 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 starting I/O failed: -6 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 starting I/O failed: -6 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 starting I/O failed: -6 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 starting I/O failed: -6 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 starting I/O failed: -6 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read 
completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 starting I/O failed: -6 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 starting I/O failed: -6 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 starting I/O failed: -6 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 starting I/O failed: -6 00:22:40.596 [2024-04-26 21:25:29.686748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5468d0 is same with the state(5) to be set 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Write completed with error (sct=0, sc=8) 00:22:40.596 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 
00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 [2024-04-26 21:25:29.687967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x564bd0 is same with the state(5) to be set 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 starting I/O failed: -6 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 starting I/O failed: -6 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 starting I/O failed: -6 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 starting I/O failed: -6 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 starting I/O failed: -6 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 starting I/O failed: -6 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 starting I/O failed: -6 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 starting I/O failed: -6 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 starting I/O failed: -6 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 [2024-04-26 21:25:29.692063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efcc8000c00 is same with the state(5) to be set 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 
00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Read completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:40.597 Write completed with error (sct=0, sc=8) 00:22:41.584 [2024-04-26 21:25:30.671612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54bf80 is same with the state(5) to be set 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Write completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Write completed with error (sct=0, sc=8) 00:22:41.584 Write completed with error (sct=0, sc=8) 00:22:41.584 Write completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Write completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Write completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Write completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 [2024-04-26 21:25:30.685769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x546740 is same with the state(5) to be set 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read 
completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Write completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Write completed with error (sct=0, sc=8) 00:22:41.584 Write completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 [2024-04-26 21:25:30.686162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x546b90 is same with the state(5) to be set 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Write completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Write completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Write completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.584 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 [2024-04-26 21:25:30.688437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efcc800bf90 is same with the state(5) to be set 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error 
(sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Write completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 Read completed with error (sct=0, sc=8) 00:22:41.585 [2024-04-26 21:25:30.688659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efcc800c690 is same with the state(5) to be set 00:22:41.585 [2024-04-26 21:25:30.689471] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54bf80 (9): Bad file descriptor 00:22:41.585 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:41.585 21:25:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.585 21:25:30 -- target/delete_subsystem.sh@34 -- # delay=0 00:22:41.585 21:25:30 -- target/delete_subsystem.sh@35 -- # kill -0 86835 00:22:41.585 21:25:30 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:22:41.585 Initializing NVMe Controllers 00:22:41.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:41.585 Controller IO queue size 128, less than required. 00:22:41.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:41.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:22:41.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:22:41.585 Initialization complete. Launching workers. 
00:22:41.585 ======================================================== 00:22:41.585 Latency(us) 00:22:41.585 Device Information : IOPS MiB/s Average min max 00:22:41.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.28 0.08 904179.94 990.26 1007505.59 00:22:41.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 148.35 0.07 1112372.79 298.84 2001870.08 00:22:41.585 ======================================================== 00:22:41.585 Total : 313.63 0.15 1002658.46 298.84 2001870.08 00:22:41.585 00:22:42.180 21:25:31 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:22:42.180 21:25:31 -- target/delete_subsystem.sh@35 -- # kill -0 86835 00:22:42.180 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (86835) - No such process 00:22:42.180 21:25:31 -- target/delete_subsystem.sh@45 -- # NOT wait 86835 00:22:42.180 21:25:31 -- common/autotest_common.sh@638 -- # local es=0 00:22:42.180 21:25:31 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 86835 00:22:42.180 21:25:31 -- common/autotest_common.sh@626 -- # local arg=wait 00:22:42.180 21:25:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:42.180 21:25:31 -- common/autotest_common.sh@630 -- # type -t wait 00:22:42.180 21:25:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:42.180 21:25:31 -- common/autotest_common.sh@641 -- # wait 86835 00:22:42.180 21:25:31 -- common/autotest_common.sh@641 -- # es=1 00:22:42.180 21:25:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:42.180 21:25:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:42.180 21:25:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:42.180 21:25:31 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:42.180 21:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.180 21:25:31 -- common/autotest_common.sh@10 -- # set +x 00:22:42.180 21:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.180 21:25:31 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:42.180 21:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.180 21:25:31 -- common/autotest_common.sh@10 -- # set +x 00:22:42.180 [2024-04-26 21:25:31.217533] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.180 21:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.180 21:25:31 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:42.180 21:25:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.180 21:25:31 -- common/autotest_common.sh@10 -- # set +x 00:22:42.180 21:25:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.180 21:25:31 -- target/delete_subsystem.sh@54 -- # perf_pid=86879 00:22:42.180 21:25:31 -- target/delete_subsystem.sh@56 -- # delay=0 00:22:42.180 21:25:31 -- target/delete_subsystem.sh@57 -- # kill -0 86879 00:22:42.180 21:25:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:42.180 21:25:31 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:22:42.180 [2024-04-26 21:25:31.406676] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: 
*WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:42.749 21:25:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:42.749 21:25:31 -- target/delete_subsystem.sh@57 -- # kill -0 86879 00:22:42.749 21:25:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:43.008 21:25:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:43.008 21:25:32 -- target/delete_subsystem.sh@57 -- # kill -0 86879 00:22:43.008 21:25:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:43.577 21:25:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:43.577 21:25:32 -- target/delete_subsystem.sh@57 -- # kill -0 86879 00:22:43.577 21:25:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:44.146 21:25:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:44.146 21:25:33 -- target/delete_subsystem.sh@57 -- # kill -0 86879 00:22:44.146 21:25:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:44.716 21:25:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:44.717 21:25:33 -- target/delete_subsystem.sh@57 -- # kill -0 86879 00:22:44.717 21:25:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:45.285 21:25:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:45.285 21:25:34 -- target/delete_subsystem.sh@57 -- # kill -0 86879 00:22:45.285 21:25:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:22:45.285 Initializing NVMe Controllers 00:22:45.285 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:45.285 Controller IO queue size 128, less than required. 00:22:45.285 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:45.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:22:45.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:22:45.285 Initialization complete. Launching workers. 
00:22:45.285 ======================================================== 00:22:45.285 Latency(us) 00:22:45.285 Device Information : IOPS MiB/s Average min max 00:22:45.285 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002748.18 1000121.33 1009592.38 00:22:45.285 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004776.39 1000242.08 1042224.69 00:22:45.285 ======================================================== 00:22:45.285 Total : 256.00 0.12 1003762.28 1000121.33 1042224.69 00:22:45.285 00:22:45.544 21:25:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:22:45.544 21:25:34 -- target/delete_subsystem.sh@57 -- # kill -0 86879 00:22:45.544 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (86879) - No such process 00:22:45.544 21:25:34 -- target/delete_subsystem.sh@67 -- # wait 86879 00:22:45.544 21:25:34 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:45.544 21:25:34 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:22:45.544 21:25:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:45.544 21:25:34 -- nvmf/common.sh@117 -- # sync 00:22:46.112 21:25:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.112 21:25:35 -- nvmf/common.sh@120 -- # set +e 00:22:46.112 21:25:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.112 21:25:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.112 rmmod nvme_tcp 00:22:46.112 rmmod nvme_fabrics 00:22:46.112 rmmod nvme_keyring 00:22:46.112 21:25:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.112 21:25:35 -- nvmf/common.sh@124 -- # set -e 00:22:46.112 21:25:35 -- nvmf/common.sh@125 -- # return 0 00:22:46.112 21:25:35 -- nvmf/common.sh@478 -- # '[' -n 86783 ']' 00:22:46.112 21:25:35 -- nvmf/common.sh@479 -- # killprocess 86783 00:22:46.112 21:25:35 -- common/autotest_common.sh@936 -- # '[' -z 86783 ']' 00:22:46.112 21:25:35 -- common/autotest_common.sh@940 -- # kill -0 86783 00:22:46.112 21:25:35 -- common/autotest_common.sh@941 -- # uname 00:22:46.112 21:25:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:46.112 21:25:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86783 00:22:46.112 killing process with pid 86783 00:22:46.112 21:25:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:46.112 21:25:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:46.112 21:25:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86783' 00:22:46.112 21:25:35 -- common/autotest_common.sh@955 -- # kill 86783 00:22:46.112 21:25:35 -- common/autotest_common.sh@960 -- # wait 86783 00:22:46.370 21:25:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:46.370 21:25:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:46.370 21:25:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:46.370 21:25:35 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.370 21:25:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:46.370 21:25:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.370 21:25:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.370 21:25:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.370 21:25:35 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:46.370 00:22:46.371 real 0m9.632s 00:22:46.371 user 0m29.888s 00:22:46.371 sys 0m1.124s 00:22:46.371 21:25:35 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:22:46.371 21:25:35 -- common/autotest_common.sh@10 -- # set +x 00:22:46.371 ************************************ 00:22:46.371 END TEST nvmf_delete_subsystem 00:22:46.371 ************************************ 00:22:46.630 21:25:35 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:22:46.630 21:25:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:46.630 21:25:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:46.630 21:25:35 -- common/autotest_common.sh@10 -- # set +x 00:22:46.630 ************************************ 00:22:46.630 START TEST nvmf_ns_masking 00:22:46.630 ************************************ 00:22:46.630 21:25:35 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:22:46.630 * Looking for test storage... 00:22:46.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:46.630 21:25:35 -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:46.630 21:25:35 -- nvmf/common.sh@7 -- # uname -s 00:22:46.630 21:25:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.630 21:25:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.630 21:25:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.630 21:25:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.630 21:25:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.630 21:25:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.630 21:25:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.630 21:25:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.630 21:25:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.630 21:25:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.630 21:25:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:22:46.630 21:25:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:22:46.630 21:25:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.630 21:25:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.630 21:25:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:46.630 21:25:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.630 21:25:35 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:46.630 21:25:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.630 21:25:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.630 21:25:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.630 21:25:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.630 21:25:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.630 21:25:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.630 21:25:35 -- paths/export.sh@5 -- # export PATH 00:22:46.630 21:25:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.630 21:25:35 -- nvmf/common.sh@47 -- # : 0 00:22:46.630 21:25:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:46.630 21:25:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:46.630 21:25:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.630 21:25:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.630 21:25:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.630 21:25:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:46.630 21:25:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:46.630 21:25:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:46.889 21:25:35 -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:46.889 21:25:35 -- target/ns_masking.sh@11 -- # loops=5 00:22:46.889 21:25:35 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:22:46.889 21:25:35 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:22:46.889 21:25:35 -- target/ns_masking.sh@15 -- # uuidgen 00:22:46.889 21:25:35 -- target/ns_masking.sh@15 -- # HOSTID=cacfcf2d-b6af-4af9-a31a-acfc7c264ab6 00:22:46.889 21:25:35 -- target/ns_masking.sh@44 -- # nvmftestinit 00:22:46.889 21:25:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:46.889 21:25:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.889 21:25:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:46.889 21:25:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:46.889 21:25:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:46.889 21:25:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.889 21:25:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.889 21:25:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
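The ns_masking run keys its checks off the host identity defined above (HOSTNQN nqn.2016-06.io.spdk:host1 plus the generated host ID). Purely as an illustration of how that identity is presented on the wire, and not the literal test commands, an initiator would connect to the subsystem like this:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2016-06.io.spdk:host1 \
        --hostid=cacfcf2d-b6af-4af9-a31a-acfc7c264ab6
    nvme list    # which namespaces appear depends on the masking rules the test applies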
00:22:46.889 21:25:35 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:46.889 21:25:35 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:46.889 21:25:35 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:46.889 21:25:35 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:46.889 21:25:35 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:46.889 21:25:35 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:46.889 21:25:35 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.889 21:25:35 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.889 21:25:35 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:46.889 21:25:35 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:46.889 21:25:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:46.889 21:25:35 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:46.889 21:25:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:46.889 21:25:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.889 21:25:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:46.889 21:25:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:46.889 21:25:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:46.889 21:25:35 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:46.889 21:25:35 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:46.889 21:25:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:46.889 Cannot find device "nvmf_tgt_br" 00:22:46.889 21:25:35 -- nvmf/common.sh@155 -- # true 00:22:46.889 21:25:35 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:46.889 Cannot find device "nvmf_tgt_br2" 00:22:46.889 21:25:35 -- nvmf/common.sh@156 -- # true 00:22:46.889 21:25:35 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:46.889 21:25:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:46.889 Cannot find device "nvmf_tgt_br" 00:22:46.889 21:25:35 -- nvmf/common.sh@158 -- # true 00:22:46.889 21:25:35 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:46.889 Cannot find device "nvmf_tgt_br2" 00:22:46.889 21:25:35 -- nvmf/common.sh@159 -- # true 00:22:46.889 21:25:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:46.889 21:25:36 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:46.889 21:25:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:46.889 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.889 21:25:36 -- nvmf/common.sh@162 -- # true 00:22:46.889 21:25:36 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:46.889 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.889 21:25:36 -- nvmf/common.sh@163 -- # true 00:22:46.889 21:25:36 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:46.889 21:25:36 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:46.889 21:25:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:46.889 21:25:36 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:46.889 21:25:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:47.149 21:25:36 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
00:22:47.149 21:25:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:47.149 21:25:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:47.149 21:25:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:47.149 21:25:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:47.149 21:25:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:47.149 21:25:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:47.149 21:25:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:47.149 21:25:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:47.149 21:25:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:47.149 21:25:36 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:47.149 21:25:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:47.149 21:25:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:47.149 21:25:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:47.149 21:25:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:47.149 21:25:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:47.149 21:25:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:47.149 21:25:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:47.149 21:25:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:47.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:22:47.149 00:22:47.149 --- 10.0.0.2 ping statistics --- 00:22:47.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.149 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:22:47.149 21:25:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:47.149 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:47.149 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:22:47.149 00:22:47.149 --- 10.0.0.3 ping statistics --- 00:22:47.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.149 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:22:47.149 21:25:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:47.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:47.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:22:47.149 00:22:47.149 --- 10.0.0.1 ping statistics --- 00:22:47.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.149 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:22:47.149 21:25:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.149 21:25:36 -- nvmf/common.sh@422 -- # return 0 00:22:47.149 21:25:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:47.149 21:25:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.149 21:25:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:47.149 21:25:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:47.149 21:25:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.149 21:25:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:47.149 21:25:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:47.149 21:25:36 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:22:47.149 21:25:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:47.149 21:25:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:47.149 21:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:47.149 21:25:36 -- nvmf/common.sh@470 -- # nvmfpid=87126 00:22:47.149 21:25:36 -- nvmf/common.sh@471 -- # waitforlisten 87126 00:22:47.149 21:25:36 -- common/autotest_common.sh@817 -- # '[' -z 87126 ']' 00:22:47.149 21:25:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.149 21:25:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:47.149 21:25:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:47.149 21:25:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.149 21:25:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:47.149 21:25:36 -- common/autotest_common.sh@10 -- # set +x 00:22:47.409 [2024-04-26 21:25:36.408470] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:47.409 [2024-04-26 21:25:36.408539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.409 [2024-04-26 21:25:36.550723] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.409 [2024-04-26 21:25:36.603592] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.409 [2024-04-26 21:25:36.603648] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.409 [2024-04-26 21:25:36.603666] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.409 [2024-04-26 21:25:36.603672] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.409 [2024-04-26 21:25:36.603677] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
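The network plumbing and target launch traced above reduce to the following condensed sketch, reconstructed from the nvmf_veth_init and nvmfappstart commands in this log (interface names, the 10.0.0.x addresses, and the 0xF core mask are taken from the trace; the real helpers in nvmf/common.sh add cleanup and error handling not shown here):

    # One veth pair for the initiator side, two for the target namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # The initiator keeps 10.0.0.1; the target namespace owns 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring the links up and bridge the root-namespace peer ends together.
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP traffic on port 4420 and verify connectivity with pings.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

    # Start the target inside the namespace with the core mask used by this test.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &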
00:22:47.409 [2024-04-26 21:25:36.603907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.409 [2024-04-26 21:25:36.604103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.409 [2024-04-26 21:25:36.604031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.409 [2024-04-26 21:25:36.604104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.355 21:25:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:48.355 21:25:37 -- common/autotest_common.sh@850 -- # return 0 00:22:48.355 21:25:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:48.355 21:25:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:48.355 21:25:37 -- common/autotest_common.sh@10 -- # set +x 00:22:48.355 21:25:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.355 21:25:37 -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:48.355 [2024-04-26 21:25:37.542142] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.355 21:25:37 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:22:48.355 21:25:37 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:22:48.355 21:25:37 -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:48.615 Malloc1 00:22:48.615 21:25:37 -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:22:48.874 Malloc2 00:22:48.874 21:25:38 -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:49.133 21:25:38 -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:22:49.390 21:25:38 -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.648 [2024-04-26 21:25:38.730411] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.648 21:25:38 -- target/ns_masking.sh@61 -- # connect 00:22:49.648 21:25:38 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cacfcf2d-b6af-4af9-a31a-acfc7c264ab6 -a 10.0.0.2 -s 4420 -i 4 00:22:49.648 21:25:38 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:22:49.648 21:25:38 -- common/autotest_common.sh@1184 -- # local i=0 00:22:49.648 21:25:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:49.648 21:25:38 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:49.648 21:25:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:52.184 21:25:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:52.184 21:25:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:52.184 21:25:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:22:52.184 21:25:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:52.184 21:25:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:52.184 21:25:40 -- common/autotest_common.sh@1194 -- # return 0 00:22:52.184 21:25:40 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:22:52.184 21:25:40 -- 
target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:22:52.184 21:25:40 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:22:52.184 21:25:40 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:22:52.184 21:25:40 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:22:52.184 21:25:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:52.184 21:25:40 -- target/ns_masking.sh@39 -- # grep 0x1 00:22:52.184 [ 0]:0x1 00:22:52.185 21:25:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:52.185 21:25:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:52.185 21:25:41 -- target/ns_masking.sh@40 -- # nguid=eb0b3b30c8ae4a89ad1d0a5e6884d24c 00:22:52.185 21:25:41 -- target/ns_masking.sh@41 -- # [[ eb0b3b30c8ae4a89ad1d0a5e6884d24c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:52.185 21:25:41 -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:22:52.185 21:25:41 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:22:52.185 21:25:41 -- target/ns_masking.sh@39 -- # grep 0x1 00:22:52.185 21:25:41 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:52.185 [ 0]:0x1 00:22:52.185 21:25:41 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:52.185 21:25:41 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:52.185 21:25:41 -- target/ns_masking.sh@40 -- # nguid=eb0b3b30c8ae4a89ad1d0a5e6884d24c 00:22:52.185 21:25:41 -- target/ns_masking.sh@41 -- # [[ eb0b3b30c8ae4a89ad1d0a5e6884d24c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:52.185 21:25:41 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:22:52.185 21:25:41 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:52.185 21:25:41 -- target/ns_masking.sh@39 -- # grep 0x2 00:22:52.185 [ 1]:0x2 00:22:52.185 21:25:41 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:52.185 21:25:41 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:52.185 21:25:41 -- target/ns_masking.sh@40 -- # nguid=fd65dbe5f9c844b78d64bb87def3f651 00:22:52.185 21:25:41 -- target/ns_masking.sh@41 -- # [[ fd65dbe5f9c844b78d64bb87def3f651 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:52.185 21:25:41 -- target/ns_masking.sh@69 -- # disconnect 00:22:52.185 21:25:41 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:52.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:52.185 21:25:41 -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:52.444 21:25:41 -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:22:52.703 21:25:41 -- target/ns_masking.sh@77 -- # connect 1 00:22:52.703 21:25:41 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cacfcf2d-b6af-4af9-a31a-acfc7c264ab6 -a 10.0.0.2 -s 4420 -i 4 00:22:52.703 21:25:41 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:22:52.703 21:25:41 -- common/autotest_common.sh@1184 -- # local i=0 00:22:52.703 21:25:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:52.703 21:25:41 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:22:52.703 21:25:41 -- 
common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:22:52.703 21:25:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:55.252 21:25:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:55.252 21:25:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:55.252 21:25:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:22:55.252 21:25:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:55.252 21:25:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:55.252 21:25:43 -- common/autotest_common.sh@1194 -- # return 0 00:22:55.252 21:25:43 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:22:55.252 21:25:43 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:22:55.252 21:25:43 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:22:55.252 21:25:43 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:22:55.252 21:25:43 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:22:55.252 21:25:43 -- common/autotest_common.sh@638 -- # local es=0 00:22:55.252 21:25:43 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:22:55.252 21:25:43 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:22:55.252 21:25:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:55.252 21:25:43 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:22:55.252 21:25:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:55.252 21:25:43 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:22:55.252 21:25:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:55.252 21:25:43 -- target/ns_masking.sh@39 -- # grep 0x1 00:22:55.252 21:25:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:55.252 21:25:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:55.252 21:25:44 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:22:55.252 21:25:44 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:55.252 21:25:44 -- common/autotest_common.sh@641 -- # es=1 00:22:55.252 21:25:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:55.252 21:25:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:55.252 21:25:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:55.252 21:25:44 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:22:55.252 21:25:44 -- target/ns_masking.sh@39 -- # grep 0x2 00:22:55.252 21:25:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:55.252 [ 0]:0x2 00:22:55.252 21:25:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:55.252 21:25:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:55.252 21:25:44 -- target/ns_masking.sh@40 -- # nguid=fd65dbe5f9c844b78d64bb87def3f651 00:22:55.252 21:25:44 -- target/ns_masking.sh@41 -- # [[ fd65dbe5f9c844b78d64bb87def3f651 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:55.252 21:25:44 -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:22:55.252 21:25:44 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:22:55.252 21:25:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:55.252 21:25:44 -- target/ns_masking.sh@39 -- # grep 0x1 00:22:55.252 [ 0]:0x1 00:22:55.252 
21:25:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:55.252 21:25:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:55.252 21:25:44 -- target/ns_masking.sh@40 -- # nguid=eb0b3b30c8ae4a89ad1d0a5e6884d24c 00:22:55.252 21:25:44 -- target/ns_masking.sh@41 -- # [[ eb0b3b30c8ae4a89ad1d0a5e6884d24c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:55.252 21:25:44 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:22:55.252 21:25:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:55.252 21:25:44 -- target/ns_masking.sh@39 -- # grep 0x2 00:22:55.252 [ 1]:0x2 00:22:55.252 21:25:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:55.252 21:25:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:55.252 21:25:44 -- target/ns_masking.sh@40 -- # nguid=fd65dbe5f9c844b78d64bb87def3f651 00:22:55.252 21:25:44 -- target/ns_masking.sh@41 -- # [[ fd65dbe5f9c844b78d64bb87def3f651 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:55.253 21:25:44 -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:22:55.512 21:25:44 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:22:55.512 21:25:44 -- common/autotest_common.sh@638 -- # local es=0 00:22:55.512 21:25:44 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:22:55.512 21:25:44 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:22:55.512 21:25:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:55.512 21:25:44 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:22:55.512 21:25:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:55.512 21:25:44 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:22:55.512 21:25:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:55.512 21:25:44 -- target/ns_masking.sh@39 -- # grep 0x1 00:22:55.512 21:25:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:55.512 21:25:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:55.512 21:25:44 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:22:55.512 21:25:44 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:55.512 21:25:44 -- common/autotest_common.sh@641 -- # es=1 00:22:55.512 21:25:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:55.512 21:25:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:55.512 21:25:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:55.512 21:25:44 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:22:55.512 21:25:44 -- target/ns_masking.sh@39 -- # grep 0x2 00:22:55.512 21:25:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:55.512 [ 0]:0x2 00:22:55.512 21:25:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:55.512 21:25:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:55.512 21:25:44 -- target/ns_masking.sh@40 -- # nguid=fd65dbe5f9c844b78d64bb87def3f651 00:22:55.512 21:25:44 -- target/ns_masking.sh@41 -- # [[ fd65dbe5f9c844b78d64bb87def3f651 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:55.512 21:25:44 -- target/ns_masking.sh@91 -- # disconnect 00:22:55.512 21:25:44 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:55.512 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:55.513 21:25:44 -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:22:55.772 21:25:44 -- target/ns_masking.sh@95 -- # connect 2 00:22:55.772 21:25:44 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cacfcf2d-b6af-4af9-a31a-acfc7c264ab6 -a 10.0.0.2 -s 4420 -i 4 00:22:56.031 21:25:45 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:22:56.031 21:25:45 -- common/autotest_common.sh@1184 -- # local i=0 00:22:56.031 21:25:45 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:56.031 21:25:45 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:22:56.031 21:25:45 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:22:56.031 21:25:45 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:57.935 21:25:47 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:57.935 21:25:47 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:57.935 21:25:47 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:22:57.935 21:25:47 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:22:57.935 21:25:47 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:57.935 21:25:47 -- common/autotest_common.sh@1194 -- # return 0 00:22:57.936 21:25:47 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:22:57.936 21:25:47 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:22:57.936 21:25:47 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:22:57.936 21:25:47 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:22:57.936 21:25:47 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:22:57.936 21:25:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:57.936 21:25:47 -- target/ns_masking.sh@39 -- # grep 0x1 00:22:57.936 [ 0]:0x1 00:22:57.936 21:25:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:57.936 21:25:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:58.194 21:25:47 -- target/ns_masking.sh@40 -- # nguid=eb0b3b30c8ae4a89ad1d0a5e6884d24c 00:22:58.194 21:25:47 -- target/ns_masking.sh@41 -- # [[ eb0b3b30c8ae4a89ad1d0a5e6884d24c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:58.194 21:25:47 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:22:58.194 21:25:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:58.194 21:25:47 -- target/ns_masking.sh@39 -- # grep 0x2 00:22:58.194 [ 1]:0x2 00:22:58.194 21:25:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:58.194 21:25:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:58.194 21:25:47 -- target/ns_masking.sh@40 -- # nguid=fd65dbe5f9c844b78d64bb87def3f651 00:22:58.194 21:25:47 -- target/ns_masking.sh@41 -- # [[ fd65dbe5f9c844b78d64bb87def3f651 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:58.195 21:25:47 -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:22:58.455 21:25:47 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:22:58.455 21:25:47 -- common/autotest_common.sh@638 -- # local es=0 00:22:58.455 21:25:47 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 
00:22:58.455 21:25:47 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:22:58.455 21:25:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:58.455 21:25:47 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:22:58.455 21:25:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:58.456 21:25:47 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:22:58.456 21:25:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:58.456 21:25:47 -- target/ns_masking.sh@39 -- # grep 0x1 00:22:58.456 21:25:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:58.456 21:25:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:58.456 21:25:47 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:22:58.456 21:25:47 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:58.456 21:25:47 -- common/autotest_common.sh@641 -- # es=1 00:22:58.456 21:25:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:58.456 21:25:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:58.456 21:25:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:58.456 21:25:47 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:22:58.456 21:25:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:58.456 21:25:47 -- target/ns_masking.sh@39 -- # grep 0x2 00:22:58.456 [ 0]:0x2 00:22:58.456 21:25:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:58.456 21:25:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:58.456 21:25:47 -- target/ns_masking.sh@40 -- # nguid=fd65dbe5f9c844b78d64bb87def3f651 00:22:58.456 21:25:47 -- target/ns_masking.sh@41 -- # [[ fd65dbe5f9c844b78d64bb87def3f651 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:58.456 21:25:47 -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:22:58.456 21:25:47 -- common/autotest_common.sh@638 -- # local es=0 00:22:58.456 21:25:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:22:58.456 21:25:47 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.456 21:25:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:58.456 21:25:47 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.456 21:25:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:58.456 21:25:47 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.456 21:25:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:58.456 21:25:47 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.456 21:25:47 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:58.456 21:25:47 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:22:58.715 [2024-04-26 21:25:47.842987] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:22:58.715 2024/04/26 21:25:47 error on 
JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:22:58.715 request: 00:22:58.715 { 00:22:58.715 "method": "nvmf_ns_remove_host", 00:22:58.715 "params": { 00:22:58.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.715 "nsid": 2, 00:22:58.715 "host": "nqn.2016-06.io.spdk:host1" 00:22:58.715 } 00:22:58.715 } 00:22:58.715 Got JSON-RPC error response 00:22:58.715 GoRPCClient: error on JSON-RPC call 00:22:58.715 21:25:47 -- common/autotest_common.sh@641 -- # es=1 00:22:58.715 21:25:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:58.715 21:25:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:58.715 21:25:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:58.715 21:25:47 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:22:58.715 21:25:47 -- common/autotest_common.sh@638 -- # local es=0 00:22:58.715 21:25:47 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:22:58.715 21:25:47 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:22:58.715 21:25:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:58.715 21:25:47 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:22:58.715 21:25:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:58.715 21:25:47 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:22:58.715 21:25:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:58.715 21:25:47 -- target/ns_masking.sh@39 -- # grep 0x1 00:22:58.715 21:25:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:58.715 21:25:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:22:58.715 21:25:47 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:22:58.715 21:25:47 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:58.715 21:25:47 -- common/autotest_common.sh@641 -- # es=1 00:22:58.716 21:25:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:58.716 21:25:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:58.716 21:25:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:58.716 21:25:47 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:22:58.716 21:25:47 -- target/ns_masking.sh@39 -- # grep 0x2 00:22:58.716 21:25:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:22:58.716 [ 0]:0x2 00:22:58.716 21:25:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:22:58.716 21:25:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:22:58.716 21:25:47 -- target/ns_masking.sh@40 -- # nguid=fd65dbe5f9c844b78d64bb87def3f651 00:22:58.716 21:25:47 -- target/ns_masking.sh@41 -- # [[ fd65dbe5f9c844b78d64bb87def3f651 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:22:58.716 21:25:47 -- target/ns_masking.sh@108 -- # disconnect 00:22:58.716 21:25:47 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:58.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:58.974 21:25:47 -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:58.974 21:25:48 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:58.974 21:25:48 -- target/ns_masking.sh@114 -- # nvmftestfini 
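Stripped of the xtrace noise, the masking flow exercised above can be reproduced with a short sketch like the following (paths, NQNs, and the uuidgen host identifier are the values shown in this trace; a target with subsystem cnode1 must already be listening on 10.0.0.2:4420):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach a namespace without auto-visibility, then grant/revoke access per host NQN.
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    # Connect as host1 and inspect what the controller exposes.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I cacfcf2d-b6af-4af9-a31a-acfc7c264ab6 -a 10.0.0.2 -s 4420 -i 4

    # Visible namespace: list-ns shows the NSID and id-ns reports a non-zero NGUID.
    # Masked namespace: the NSID is absent and the NGUID comes back all zeroes.
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

Note that the nvmf_ns_remove_host call against namespace 2 (attached earlier without --no-auto-visible) is the one rejected with Code=-32602 Invalid parameters in the error above.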
00:22:58.974 21:25:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:58.974 21:25:48 -- nvmf/common.sh@117 -- # sync 00:22:59.235 21:25:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:59.235 21:25:48 -- nvmf/common.sh@120 -- # set +e 00:22:59.235 21:25:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:59.235 21:25:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:59.235 rmmod nvme_tcp 00:22:59.235 rmmod nvme_fabrics 00:22:59.235 rmmod nvme_keyring 00:22:59.235 21:25:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:59.235 21:25:48 -- nvmf/common.sh@124 -- # set -e 00:22:59.235 21:25:48 -- nvmf/common.sh@125 -- # return 0 00:22:59.235 21:25:48 -- nvmf/common.sh@478 -- # '[' -n 87126 ']' 00:22:59.235 21:25:48 -- nvmf/common.sh@479 -- # killprocess 87126 00:22:59.235 21:25:48 -- common/autotest_common.sh@936 -- # '[' -z 87126 ']' 00:22:59.235 21:25:48 -- common/autotest_common.sh@940 -- # kill -0 87126 00:22:59.235 21:25:48 -- common/autotest_common.sh@941 -- # uname 00:22:59.235 21:25:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:59.235 21:25:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87126 00:22:59.235 killing process with pid 87126 00:22:59.235 21:25:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:59.235 21:25:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:59.235 21:25:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87126' 00:22:59.235 21:25:48 -- common/autotest_common.sh@955 -- # kill 87126 00:22:59.235 21:25:48 -- common/autotest_common.sh@960 -- # wait 87126 00:22:59.495 21:25:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:59.495 21:25:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:59.495 21:25:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:59.495 21:25:48 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:59.495 21:25:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:59.495 21:25:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.495 21:25:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.495 21:25:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.495 21:25:48 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:59.495 00:22:59.495 real 0m12.925s 00:22:59.495 user 0m51.122s 00:22:59.495 sys 0m2.049s 00:22:59.495 21:25:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:59.495 21:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:59.495 ************************************ 00:22:59.495 END TEST nvmf_ns_masking 00:22:59.495 ************************************ 00:22:59.495 21:25:48 -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:22:59.495 21:25:48 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:22:59.495 21:25:48 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:22:59.495 21:25:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:59.495 21:25:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:59.495 21:25:48 -- common/autotest_common.sh@10 -- # set +x 00:22:59.755 ************************************ 00:22:59.755 START TEST nvmf_host_management 00:22:59.755 ************************************ 00:22:59.755 21:25:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:22:59.755 * Looking for test storage... 
00:22:59.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:59.755 21:25:48 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:59.755 21:25:48 -- nvmf/common.sh@7 -- # uname -s 00:22:59.755 21:25:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.755 21:25:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.755 21:25:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.755 21:25:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.755 21:25:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.755 21:25:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.755 21:25:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.755 21:25:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.755 21:25:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.755 21:25:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.755 21:25:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:22:59.755 21:25:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:22:59.755 21:25:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.755 21:25:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.755 21:25:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:59.755 21:25:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.755 21:25:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:59.755 21:25:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.755 21:25:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.755 21:25:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.755 21:25:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.755 21:25:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.755 21:25:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.755 21:25:48 -- paths/export.sh@5 -- # export PATH 00:22:59.755 21:25:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.755 21:25:48 -- nvmf/common.sh@47 -- # : 0 00:22:59.755 21:25:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:59.755 21:25:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:59.755 21:25:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.755 21:25:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.755 21:25:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.755 21:25:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:59.755 21:25:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:59.755 21:25:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:59.755 21:25:48 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:59.755 21:25:48 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:59.755 21:25:48 -- target/host_management.sh@105 -- # nvmftestinit 00:22:59.755 21:25:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:59.755 21:25:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.755 21:25:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:59.755 21:25:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:59.755 21:25:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:59.755 21:25:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.755 21:25:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.755 21:25:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.755 21:25:48 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:59.755 21:25:48 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:59.755 21:25:48 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:59.755 21:25:48 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:59.755 21:25:48 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:59.755 21:25:48 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:59.755 21:25:48 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.755 21:25:48 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.755 21:25:48 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:59.755 21:25:48 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:59.755 21:25:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:59.755 21:25:48 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:59.755 21:25:48 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:59.755 21:25:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.755 21:25:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:59.755 21:25:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:59.755 21:25:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:59.755 21:25:48 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:59.755 21:25:48 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:59.755 21:25:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:59.755 Cannot find device "nvmf_tgt_br" 00:22:59.755 21:25:48 -- nvmf/common.sh@155 -- # true 00:22:59.755 21:25:48 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:00.015 Cannot find device "nvmf_tgt_br2" 00:23:00.015 21:25:49 -- nvmf/common.sh@156 -- # true 00:23:00.015 21:25:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:00.015 21:25:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:00.015 Cannot find device "nvmf_tgt_br" 00:23:00.015 21:25:49 -- nvmf/common.sh@158 -- # true 00:23:00.015 21:25:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:00.015 Cannot find device "nvmf_tgt_br2" 00:23:00.015 21:25:49 -- nvmf/common.sh@159 -- # true 00:23:00.015 21:25:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:00.015 21:25:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:00.015 21:25:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:00.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:00.015 21:25:49 -- nvmf/common.sh@162 -- # true 00:23:00.015 21:25:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:00.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:00.015 21:25:49 -- nvmf/common.sh@163 -- # true 00:23:00.015 21:25:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:00.015 21:25:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:00.015 21:25:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:00.015 21:25:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:00.015 21:25:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:00.015 21:25:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:00.015 21:25:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:00.015 21:25:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:00.015 21:25:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:00.015 21:25:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:00.015 21:25:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:00.015 21:25:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:00.015 21:25:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:00.015 21:25:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:00.015 21:25:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:00.015 21:25:49 -- nvmf/common.sh@189 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:23:00.015 21:25:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:00.015 21:25:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:00.015 21:25:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:00.015 21:25:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:00.015 21:25:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:00.274 21:25:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:00.274 21:25:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:00.274 21:25:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:00.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:23:00.274 00:23:00.274 --- 10.0.0.2 ping statistics --- 00:23:00.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.274 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:00.274 21:25:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:00.274 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:00.274 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:23:00.274 00:23:00.274 --- 10.0.0.3 ping statistics --- 00:23:00.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.274 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:23:00.274 21:25:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:00.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:23:00.274 00:23:00.274 --- 10.0.0.1 ping statistics --- 00:23:00.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.274 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:23:00.274 21:25:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.274 21:25:49 -- nvmf/common.sh@422 -- # return 0 00:23:00.274 21:25:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:00.274 21:25:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.274 21:25:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:00.274 21:25:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:00.274 21:25:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.274 21:25:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:00.274 21:25:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:00.274 21:25:49 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:23:00.274 21:25:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:00.274 21:25:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:00.274 21:25:49 -- common/autotest_common.sh@10 -- # set +x 00:23:00.274 ************************************ 00:23:00.274 START TEST nvmf_host_management 00:23:00.274 ************************************ 00:23:00.274 21:25:49 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:23:00.274 21:25:49 -- target/host_management.sh@69 -- # starttarget 00:23:00.274 21:25:49 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:23:00.274 21:25:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:00.274 21:25:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:00.274 21:25:49 -- common/autotest_common.sh@10 -- # set +x 00:23:00.274 21:25:49 -- nvmf/common.sh@470 -- # nvmfpid=87692 00:23:00.274 
21:25:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:00.274 21:25:49 -- nvmf/common.sh@471 -- # waitforlisten 87692 00:23:00.274 21:25:49 -- common/autotest_common.sh@817 -- # '[' -z 87692 ']' 00:23:00.274 21:25:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.274 21:25:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:00.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.274 21:25:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.275 21:25:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:00.275 21:25:49 -- common/autotest_common.sh@10 -- # set +x 00:23:00.275 [2024-04-26 21:25:49.428726] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:00.275 [2024-04-26 21:25:49.428790] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.534 [2024-04-26 21:25:49.567209] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.534 [2024-04-26 21:25:49.619582] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.534 [2024-04-26 21:25:49.619630] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.534 [2024-04-26 21:25:49.619636] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.534 [2024-04-26 21:25:49.619641] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.534 [2024-04-26 21:25:49.619646] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
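The nvmfappstart/waitforlisten pair above amounts to starting the target inside the namespace and polling its RPC socket until it answers. A minimal stand-in, under the assumption that the real helpers in nvmf/common.sh and autotest_common.sh add retry limits and error reporting beyond this sketch:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # Poll /var/tmp/spdk.sock (max_retries=100 in the trace) until the JSON-RPC
    # server responds; only then does the test start issuing rpc_cmd calls.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done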
00:23:00.534 [2024-04-26 21:25:49.619864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.534 [2024-04-26 21:25:49.620146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.534 [2024-04-26 21:25:49.620395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.534 [2024-04-26 21:25:49.620318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:01.101 21:25:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:01.102 21:25:50 -- common/autotest_common.sh@850 -- # return 0 00:23:01.102 21:25:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:01.102 21:25:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:01.102 21:25:50 -- common/autotest_common.sh@10 -- # set +x 00:23:01.102 21:25:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.102 21:25:50 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:01.102 21:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.102 21:25:50 -- common/autotest_common.sh@10 -- # set +x 00:23:01.360 [2024-04-26 21:25:50.360801] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.360 21:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.360 21:25:50 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:23:01.360 21:25:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:01.360 21:25:50 -- common/autotest_common.sh@10 -- # set +x 00:23:01.360 21:25:50 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:23:01.360 21:25:50 -- target/host_management.sh@23 -- # cat 00:23:01.360 21:25:50 -- target/host_management.sh@30 -- # rpc_cmd 00:23:01.360 21:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.360 21:25:50 -- common/autotest_common.sh@10 -- # set +x 00:23:01.360 Malloc0 00:23:01.360 [2024-04-26 21:25:50.438309] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.360 21:25:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.360 21:25:50 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:23:01.360 21:25:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:01.360 21:25:50 -- common/autotest_common.sh@10 -- # set +x 00:23:01.360 21:25:50 -- target/host_management.sh@73 -- # perfpid=87764 00:23:01.360 21:25:50 -- target/host_management.sh@74 -- # waitforlisten 87764 /var/tmp/bdevperf.sock 00:23:01.360 21:25:50 -- common/autotest_common.sh@817 -- # '[' -z 87764 ']' 00:23:01.360 21:25:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.360 21:25:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:01.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.360 21:25:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:01.360 21:25:50 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:01.360 21:25:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:01.360 21:25:50 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:23:01.360 21:25:50 -- common/autotest_common.sh@10 -- # set +x 00:23:01.360 21:25:50 -- nvmf/common.sh@521 -- # config=() 00:23:01.360 21:25:50 -- nvmf/common.sh@521 -- # local subsystem config 00:23:01.360 21:25:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:01.360 21:25:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:01.360 { 00:23:01.360 "params": { 00:23:01.360 "name": "Nvme$subsystem", 00:23:01.360 "trtype": "$TEST_TRANSPORT", 00:23:01.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.360 "adrfam": "ipv4", 00:23:01.360 "trsvcid": "$NVMF_PORT", 00:23:01.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.360 "hdgst": ${hdgst:-false}, 00:23:01.360 "ddgst": ${ddgst:-false} 00:23:01.361 }, 00:23:01.361 "method": "bdev_nvme_attach_controller" 00:23:01.361 } 00:23:01.361 EOF 00:23:01.361 )") 00:23:01.361 21:25:50 -- nvmf/common.sh@543 -- # cat 00:23:01.361 21:25:50 -- nvmf/common.sh@545 -- # jq . 00:23:01.361 21:25:50 -- nvmf/common.sh@546 -- # IFS=, 00:23:01.361 21:25:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:01.361 "params": { 00:23:01.361 "name": "Nvme0", 00:23:01.361 "trtype": "tcp", 00:23:01.361 "traddr": "10.0.0.2", 00:23:01.361 "adrfam": "ipv4", 00:23:01.361 "trsvcid": "4420", 00:23:01.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:01.361 "hdgst": false, 00:23:01.361 "ddgst": false 00:23:01.361 }, 00:23:01.361 "method": "bdev_nvme_attach_controller" 00:23:01.361 }' 00:23:01.361 [2024-04-26 21:25:50.555276] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:01.361 [2024-04-26 21:25:50.555352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87764 ] 00:23:01.619 [2024-04-26 21:25:50.690316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.619 [2024-04-26 21:25:50.751029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.878 Running I/O for 10 seconds... 
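At this point bdevperf has been handed the bdev_nvme_attach_controller parameters printed just above through /dev/fd/63. A rough manual equivalent, assuming the JSON fragment is first saved to a file (nvme0_config.json is a hypothetical name, and the enclosing wrapper produced by gen_nvmf_target_json is not shown in the trace):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json ./nvme0_config.json \
        -q 64 -o 65536 -w verify -t 10 &

    # While the 10-second verify workload runs, progress can be read back through
    # bdevperf's own RPC socket, which is what the waitforio helper does next:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'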
00:23:02.451 21:25:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:02.451 21:25:51 -- common/autotest_common.sh@850 -- # return 0 00:23:02.451 21:25:51 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:02.451 21:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.451 21:25:51 -- common/autotest_common.sh@10 -- # set +x 00:23:02.451 21:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.451 21:25:51 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:02.451 21:25:51 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:23:02.451 21:25:51 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:02.451 21:25:51 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:23:02.451 21:25:51 -- target/host_management.sh@52 -- # local ret=1 00:23:02.451 21:25:51 -- target/host_management.sh@53 -- # local i 00:23:02.451 21:25:51 -- target/host_management.sh@54 -- # (( i = 10 )) 00:23:02.451 21:25:51 -- target/host_management.sh@54 -- # (( i != 0 )) 00:23:02.451 21:25:51 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:23:02.451 21:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.451 21:25:51 -- common/autotest_common.sh@10 -- # set +x 00:23:02.451 21:25:51 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:23:02.451 21:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.451 21:25:51 -- target/host_management.sh@55 -- # read_io_count=1027 00:23:02.451 21:25:51 -- target/host_management.sh@58 -- # '[' 1027 -ge 100 ']' 00:23:02.451 21:25:51 -- target/host_management.sh@59 -- # ret=0 00:23:02.451 21:25:51 -- target/host_management.sh@60 -- # break 00:23:02.451 21:25:51 -- target/host_management.sh@64 -- # return 0 00:23:02.451 21:25:51 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:23:02.451 21:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.451 21:25:51 -- common/autotest_common.sh@10 -- # set +x 00:23:02.451 [2024-04-26 21:25:51.525338] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1085040 is same with the state(5) to be set 00:23:02.451 [2024-04-26 21:25:51.525627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.451 [2024-04-26 21:25:51.525653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.451 [2024-04-26 21:25:51.525662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.451 [2024-04-26 21:25:51.525668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.451 [2024-04-26 21:25:51.525675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.451 [2024-04-26 21:25:51.525681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.451 [2024-04-26 21:25:51.525688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:02.451 [2024-04-26 21:25:51.525693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.525700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8d30 is same with the state(5) to be set 00:23:02.452 21:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.452 21:25:51 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:23:02.452 21:25:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.452 21:25:51 -- common/autotest_common.sh@10 -- # set +x 00:23:02.452 [2024-04-26 21:25:51.532911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.532939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.532955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.532961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.532969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.532975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.532983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.532989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.532997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.452 [2024-04-26 21:25:51.533484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.452 [2024-04-26 21:25:51.533490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:02.453 [2024-04-26 21:25:51.533658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 
[2024-04-26 21:25:51.533814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.453 [2024-04-26 21:25:51.533891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:02.453 [2024-04-26 21:25:51.533957] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcf9ca0 was disconnected and freed. reset controller. 
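What happened in between: waitforio polls the bdev's read counter over the bdevperf socket (it saw 1027 reads above and declared I/O healthy), then the test removes the host NQN from the subsystem; the target tears down the TCP qpair, every queued write completes as ABORTED - SQ DELETION, and bdev_nvme resets the controller. A loose sketch of that polling-plus-removal step; the retry cadence is an assumption, the real helper counts down a fixed number of attempts:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # wait until bdevperf has completed a meaningful number of reads
    while :; do
        reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 0.5
    done
    # revoking the host NQN forces the qpair teardown logged above
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0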
00:23:02.453 task offset: 16384 on job bdev=Nvme0n1 fails 00:23:02.453 00:23:02.453 Latency(us) 00:23:02.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.453 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:02.453 Job: Nvme0n1 ended in about 0.63 seconds with error 00:23:02.453 Verification LBA range: start 0x0 length 0x400 00:23:02.453 Nvme0n1 : 0.63 1842.00 115.13 102.33 0.00 32165.66 1395.14 29992.02 00:23:02.453 =================================================================================================================== 00:23:02.453 Total : 1842.00 115.13 102.33 0.00 32165.66 1395.14 29992.02 00:23:02.453 [2024-04-26 21:25:51.535035] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:02.453 [2024-04-26 21:25:51.537139] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:02.453 [2024-04-26 21:25:51.537155] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8e8d30 (9): Bad file descriptor 00:23:02.453 21:25:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.453 21:25:51 -- target/host_management.sh@87 -- # sleep 1 00:23:02.453 [2024-04-26 21:25:51.548213] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:03.390 21:25:52 -- target/host_management.sh@91 -- # kill -9 87764 00:23:03.390 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (87764) - No such process 00:23:03.390 21:25:52 -- target/host_management.sh@91 -- # true 00:23:03.390 21:25:52 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:23:03.390 21:25:52 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:23:03.390 21:25:52 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:03.390 21:25:52 -- nvmf/common.sh@521 -- # config=() 00:23:03.390 21:25:52 -- nvmf/common.sh@521 -- # local subsystem config 00:23:03.390 21:25:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:03.390 21:25:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:03.390 { 00:23:03.390 "params": { 00:23:03.390 "name": "Nvme$subsystem", 00:23:03.390 "trtype": "$TEST_TRANSPORT", 00:23:03.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.390 "adrfam": "ipv4", 00:23:03.390 "trsvcid": "$NVMF_PORT", 00:23:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.390 "hdgst": ${hdgst:-false}, 00:23:03.390 "ddgst": ${ddgst:-false} 00:23:03.390 }, 00:23:03.390 "method": "bdev_nvme_attach_controller" 00:23:03.390 } 00:23:03.390 EOF 00:23:03.390 )") 00:23:03.390 21:25:52 -- nvmf/common.sh@543 -- # cat 00:23:03.390 21:25:52 -- nvmf/common.sh@545 -- # jq . 
00:23:03.390 21:25:52 -- nvmf/common.sh@546 -- # IFS=, 00:23:03.390 21:25:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:03.390 "params": { 00:23:03.390 "name": "Nvme0", 00:23:03.390 "trtype": "tcp", 00:23:03.390 "traddr": "10.0.0.2", 00:23:03.390 "adrfam": "ipv4", 00:23:03.390 "trsvcid": "4420", 00:23:03.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:03.390 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:03.390 "hdgst": false, 00:23:03.390 "ddgst": false 00:23:03.390 }, 00:23:03.390 "method": "bdev_nvme_attach_controller" 00:23:03.390 }' 00:23:03.390 [2024-04-26 21:25:52.600598] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:03.390 [2024-04-26 21:25:52.600657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87813 ] 00:23:03.649 [2024-04-26 21:25:52.738991] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.649 [2024-04-26 21:25:52.794933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.907 Running I/O for 1 seconds... 00:23:04.844 00:23:04.844 Latency(us) 00:23:04.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.844 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:04.844 Verification LBA range: start 0x0 length 0x400 00:23:04.844 Nvme0n1 : 1.01 1781.42 111.34 0.00 0.00 35267.02 4636.17 32510.43 00:23:04.844 =================================================================================================================== 00:23:04.844 Total : 1781.42 111.34 0.00 0.00 35267.02 4636.17 32510.43 00:23:05.104 21:25:54 -- target/host_management.sh@102 -- # stoptarget 00:23:05.104 21:25:54 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:23:05.104 21:25:54 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:23:05.104 21:25:54 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:23:05.104 21:25:54 -- target/host_management.sh@40 -- # nvmftestfini 00:23:05.104 21:25:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:05.104 21:25:54 -- nvmf/common.sh@117 -- # sync 00:23:05.104 21:25:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:05.104 21:25:54 -- nvmf/common.sh@120 -- # set +e 00:23:05.104 21:25:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:05.104 21:25:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:05.104 rmmod nvme_tcp 00:23:05.104 rmmod nvme_fabrics 00:23:05.104 rmmod nvme_keyring 00:23:05.104 21:25:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:05.104 21:25:54 -- nvmf/common.sh@124 -- # set -e 00:23:05.104 21:25:54 -- nvmf/common.sh@125 -- # return 0 00:23:05.104 21:25:54 -- nvmf/common.sh@478 -- # '[' -n 87692 ']' 00:23:05.104 21:25:54 -- nvmf/common.sh@479 -- # killprocess 87692 00:23:05.104 21:25:54 -- common/autotest_common.sh@936 -- # '[' -z 87692 ']' 00:23:05.104 21:25:54 -- common/autotest_common.sh@940 -- # kill -0 87692 00:23:05.104 21:25:54 -- common/autotest_common.sh@941 -- # uname 00:23:05.104 21:25:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:05.104 21:25:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87692 00:23:05.104 21:25:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:05.104 21:25:54 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:05.104 21:25:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87692' 00:23:05.104 killing process with pid 87692 00:23:05.104 21:25:54 -- common/autotest_common.sh@955 -- # kill 87692 00:23:05.104 21:25:54 -- common/autotest_common.sh@960 -- # wait 87692 00:23:05.362 [2024-04-26 21:25:54.477207] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:23:05.362 21:25:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:05.362 21:25:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:05.362 21:25:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:05.362 21:25:54 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:05.362 21:25:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:05.362 21:25:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.362 21:25:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.362 21:25:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.362 21:25:54 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:05.362 00:23:05.362 real 0m5.188s 00:23:05.362 user 0m21.878s 00:23:05.362 sys 0m1.101s 00:23:05.362 21:25:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:05.362 21:25:54 -- common/autotest_common.sh@10 -- # set +x 00:23:05.362 ************************************ 00:23:05.362 END TEST nvmf_host_management 00:23:05.362 ************************************ 00:23:05.362 21:25:54 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:05.622 ************************************ 00:23:05.622 END TEST nvmf_host_management 00:23:05.622 ************************************ 00:23:05.622 00:23:05.622 real 0m5.823s 00:23:05.622 user 0m22.035s 00:23:05.622 sys 0m1.417s 00:23:05.622 21:25:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:05.622 21:25:54 -- common/autotest_common.sh@10 -- # set +x 00:23:05.622 21:25:54 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:23:05.622 21:25:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:05.622 21:25:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:05.622 21:25:54 -- common/autotest_common.sh@10 -- # set +x 00:23:05.622 ************************************ 00:23:05.622 START TEST nvmf_lvol 00:23:05.622 ************************************ 00:23:05.622 21:25:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:23:05.622 * Looking for test storage... 
00:23:05.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:05.622 21:25:54 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:05.622 21:25:54 -- nvmf/common.sh@7 -- # uname -s 00:23:05.622 21:25:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.622 21:25:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.622 21:25:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.622 21:25:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.622 21:25:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.622 21:25:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.622 21:25:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.622 21:25:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.622 21:25:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.622 21:25:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.622 21:25:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:23:05.622 21:25:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:23:05.622 21:25:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.622 21:25:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.622 21:25:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:05.622 21:25:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.622 21:25:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:05.622 21:25:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.622 21:25:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.622 21:25:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.622 21:25:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.622 21:25:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.622 21:25:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.622 21:25:54 -- paths/export.sh@5 -- # export PATH 00:23:05.622 21:25:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.622 21:25:54 -- nvmf/common.sh@47 -- # : 0 00:23:05.622 21:25:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.622 21:25:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.622 21:25:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.622 21:25:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.622 21:25:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.622 21:25:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.622 21:25:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.622 21:25:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.622 21:25:54 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.622 21:25:54 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.622 21:25:54 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:23:05.622 21:25:54 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:23:05.622 21:25:54 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.622 21:25:54 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:23:05.622 21:25:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:05.622 21:25:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.622 21:25:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:05.622 21:25:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:05.622 21:25:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:05.622 21:25:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.622 21:25:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.622 21:25:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.622 21:25:54 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:05.622 21:25:54 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:05.622 21:25:54 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:05.622 21:25:54 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:05.622 21:25:54 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:05.622 21:25:54 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:05.622 21:25:54 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.622 21:25:54 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.622 21:25:54 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:05.622 21:25:54 -- 
nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:05.622 21:25:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:05.622 21:25:54 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:05.622 21:25:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:05.622 21:25:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.622 21:25:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:05.622 21:25:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:05.622 21:25:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:05.622 21:25:54 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:05.622 21:25:54 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:05.881 21:25:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:05.881 Cannot find device "nvmf_tgt_br" 00:23:05.881 21:25:54 -- nvmf/common.sh@155 -- # true 00:23:05.881 21:25:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:05.881 Cannot find device "nvmf_tgt_br2" 00:23:05.881 21:25:54 -- nvmf/common.sh@156 -- # true 00:23:05.881 21:25:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:05.881 21:25:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:05.881 Cannot find device "nvmf_tgt_br" 00:23:05.881 21:25:54 -- nvmf/common.sh@158 -- # true 00:23:05.881 21:25:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:05.881 Cannot find device "nvmf_tgt_br2" 00:23:05.881 21:25:54 -- nvmf/common.sh@159 -- # true 00:23:05.881 21:25:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:05.881 21:25:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:05.881 21:25:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:05.881 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.881 21:25:55 -- nvmf/common.sh@162 -- # true 00:23:05.881 21:25:55 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:05.881 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.881 21:25:55 -- nvmf/common.sh@163 -- # true 00:23:05.881 21:25:55 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:05.882 21:25:55 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:05.882 21:25:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:05.882 21:25:55 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:05.882 21:25:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:05.882 21:25:55 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:05.882 21:25:55 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:05.882 21:25:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:05.882 21:25:55 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:05.882 21:25:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:05.882 21:25:55 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:05.882 21:25:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:05.882 21:25:55 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:05.882 21:25:55 -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:05.882 21:25:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:05.882 21:25:55 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:05.882 21:25:55 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:05.882 21:25:55 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:05.882 21:25:55 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:06.141 21:25:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:06.141 21:25:55 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:06.141 21:25:55 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:06.141 21:25:55 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:06.141 21:25:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:06.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:23:06.141 00:23:06.141 --- 10.0.0.2 ping statistics --- 00:23:06.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.141 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:06.141 21:25:55 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:06.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:06.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:23:06.141 00:23:06.141 --- 10.0.0.3 ping statistics --- 00:23:06.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.141 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:06.141 21:25:55 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:06.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:06.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:23:06.141 00:23:06.141 --- 10.0.0.1 ping statistics --- 00:23:06.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.141 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:23:06.141 21:25:55 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.141 21:25:55 -- nvmf/common.sh@422 -- # return 0 00:23:06.141 21:25:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:06.141 21:25:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.141 21:25:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:06.141 21:25:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:06.141 21:25:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.141 21:25:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:06.141 21:25:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:06.141 21:25:55 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:23:06.141 21:25:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:06.141 21:25:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:06.141 21:25:55 -- common/autotest_common.sh@10 -- # set +x 00:23:06.141 21:25:55 -- nvmf/common.sh@470 -- # nvmfpid=88049 00:23:06.141 21:25:55 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:23:06.141 21:25:55 -- nvmf/common.sh@471 -- # waitforlisten 88049 00:23:06.141 21:25:55 -- common/autotest_common.sh@817 -- # '[' -z 88049 ']' 00:23:06.141 21:25:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.141 21:25:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:06.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.141 21:25:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.141 21:25:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:06.141 21:25:55 -- common/autotest_common.sh@10 -- # set +x 00:23:06.141 [2024-04-26 21:25:55.275087] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:06.141 [2024-04-26 21:25:55.275160] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.401 [2024-04-26 21:25:55.414651] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:06.401 [2024-04-26 21:25:55.464268] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.401 [2024-04-26 21:25:55.464319] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.401 [2024-04-26 21:25:55.464326] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.401 [2024-04-26 21:25:55.464341] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.401 [2024-04-26 21:25:55.464346] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
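nvmf_lvol.sh runs against the usual virtual topology from nvmf/common.sh: the target lives in the nvmf_tgt_ns_spdk namespace holding 10.0.0.2 and 10.0.0.3 on veth pairs, the initiator keeps 10.0.0.1, everything hangs off the nvmf_br bridge, and an iptables rule admits NVMe/TCP on port 4420; the three pings above are the sanity check before nvmf_tgt is started inside the namespace on core mask 0x7. Condensed from the commands in the trace (link-up steps and the second target interface are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target, as verified above
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7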
00:23:06.401 [2024-04-26 21:25:55.464552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.401 [2024-04-26 21:25:55.465500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.401 [2024-04-26 21:25:55.465501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.969 21:25:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:06.969 21:25:56 -- common/autotest_common.sh@850 -- # return 0 00:23:06.969 21:25:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:06.969 21:25:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:06.969 21:25:56 -- common/autotest_common.sh@10 -- # set +x 00:23:06.969 21:25:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.969 21:25:56 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:07.228 [2024-04-26 21:25:56.360232] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.228 21:25:56 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:07.487 21:25:56 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:23:07.487 21:25:56 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:07.747 21:25:56 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:23:07.747 21:25:56 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:23:08.007 21:25:57 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:23:08.266 21:25:57 -- target/nvmf_lvol.sh@29 -- # lvs=ba6fa170-3f45-47cf-815a-253b59c44834 00:23:08.266 21:25:57 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ba6fa170-3f45-47cf-815a-253b59c44834 lvol 20 00:23:08.525 21:25:57 -- target/nvmf_lvol.sh@32 -- # lvol=fa652b3a-3036-4ffc-af20-59ccfe6b4a63 00:23:08.525 21:25:57 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:08.525 21:25:57 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fa652b3a-3036-4ffc-af20-59ccfe6b4a63 00:23:08.784 21:25:57 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:09.044 [2024-04-26 21:25:58.155596] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.044 21:25:58 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:09.304 21:25:58 -- target/nvmf_lvol.sh@42 -- # perf_pid=88192 00:23:09.304 21:25:58 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:23:09.304 21:25:58 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:23:10.242 21:25:59 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot fa652b3a-3036-4ffc-af20-59ccfe6b4a63 MY_SNAPSHOT 00:23:10.502 21:25:59 -- target/nvmf_lvol.sh@47 -- # snapshot=caf90726-3b68-4ddb-8509-e93c73c1daf6 00:23:10.502 21:25:59 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize fa652b3a-3036-4ffc-af20-59ccfe6b4a63 30 00:23:10.761 21:25:59 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone caf90726-3b68-4ddb-8509-e93c73c1daf6 MY_CLONE 00:23:11.020 21:26:00 -- target/nvmf_lvol.sh@49 -- # clone=0fe49138-1ee5-4f43-aa3b-6e30198c2711 00:23:11.020 21:26:00 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 0fe49138-1ee5-4f43-aa3b-6e30198c2711 00:23:11.955 21:26:00 -- target/nvmf_lvol.sh@53 -- # wait 88192 00:23:20.076 Initializing NVMe Controllers 00:23:20.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:23:20.076 Controller IO queue size 128, less than required. 00:23:20.076 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:20.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:23:20.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:23:20.076 Initialization complete. Launching workers. 00:23:20.076 ======================================================== 00:23:20.076 Latency(us) 00:23:20.076 Device Information : IOPS MiB/s Average min max 00:23:20.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10286.90 40.18 12446.40 2092.44 56615.68 00:23:20.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10470.00 40.90 12233.04 4527.68 88305.67 00:23:20.076 ======================================================== 00:23:20.076 Total : 20756.90 81.08 12338.78 2092.44 88305.67 00:23:20.076 00:23:20.076 21:26:08 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:20.076 21:26:08 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete fa652b3a-3036-4ffc-af20-59ccfe6b4a63 00:23:20.076 21:26:09 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba6fa170-3f45-47cf-815a-253b59c44834 00:23:20.335 21:26:09 -- target/nvmf_lvol.sh@60 -- # rm -f 00:23:20.335 21:26:09 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:23:20.335 21:26:09 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:23:20.335 21:26:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:20.335 21:26:09 -- nvmf/common.sh@117 -- # sync 00:23:20.594 21:26:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:20.594 21:26:09 -- nvmf/common.sh@120 -- # set +e 00:23:20.594 21:26:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:20.594 21:26:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:20.594 rmmod nvme_tcp 00:23:20.594 rmmod nvme_fabrics 00:23:20.595 rmmod nvme_keyring 00:23:20.595 21:26:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:20.595 21:26:09 -- nvmf/common.sh@124 -- # set -e 00:23:20.595 21:26:09 -- nvmf/common.sh@125 -- # return 0 00:23:20.595 21:26:09 -- nvmf/common.sh@478 -- # '[' -n 88049 ']' 00:23:20.595 21:26:09 -- nvmf/common.sh@479 -- # killprocess 88049 00:23:20.595 21:26:09 -- common/autotest_common.sh@936 -- # '[' -z 88049 ']' 00:23:20.595 21:26:09 -- common/autotest_common.sh@940 -- # kill -0 88049 00:23:20.595 21:26:09 -- common/autotest_common.sh@941 -- # uname 00:23:20.595 21:26:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:20.595 21:26:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 88049 00:23:20.595 killing process with pid 88049 00:23:20.595 21:26:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:20.595 21:26:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:20.595 21:26:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88049' 00:23:20.595 21:26:09 -- common/autotest_common.sh@955 -- # kill 88049 00:23:20.595 21:26:09 -- common/autotest_common.sh@960 -- # wait 88049 00:23:20.854 21:26:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:20.854 21:26:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:20.854 21:26:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:20.854 21:26:09 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:20.854 21:26:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:20.854 21:26:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.854 21:26:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.854 21:26:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.854 21:26:10 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:20.854 ************************************ 00:23:20.854 END TEST nvmf_lvol 00:23:20.854 ************************************ 00:23:20.854 00:23:20.854 real 0m15.300s 00:23:20.854 user 1m5.000s 00:23:20.854 sys 0m2.841s 00:23:20.854 21:26:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:20.854 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:23:20.854 21:26:10 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:23:20.854 21:26:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:20.854 21:26:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:20.854 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:23:21.114 ************************************ 00:23:21.114 START TEST nvmf_lvs_grow 00:23:21.114 ************************************ 00:23:21.114 21:26:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:23:21.114 * Looking for test storage... 
00:23:21.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:21.114 21:26:10 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:21.114 21:26:10 -- nvmf/common.sh@7 -- # uname -s 00:23:21.114 21:26:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.114 21:26:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.114 21:26:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.114 21:26:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.114 21:26:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.114 21:26:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.114 21:26:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.114 21:26:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.114 21:26:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.114 21:26:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.114 21:26:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:23:21.114 21:26:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:23:21.114 21:26:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.114 21:26:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.114 21:26:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:21.114 21:26:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.114 21:26:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:21.114 21:26:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.114 21:26:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.114 21:26:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.114 21:26:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.114 21:26:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.114 21:26:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.114 21:26:10 -- paths/export.sh@5 -- # export PATH 00:23:21.114 21:26:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.114 21:26:10 -- nvmf/common.sh@47 -- # : 0 00:23:21.114 21:26:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.114 21:26:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.114 21:26:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.114 21:26:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.114 21:26:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.114 21:26:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.114 21:26:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.114 21:26:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.114 21:26:10 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:21.114 21:26:10 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:21.114 21:26:10 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:23:21.114 21:26:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:21.114 21:26:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.114 21:26:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:21.114 21:26:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:21.114 21:26:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:21.114 21:26:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.114 21:26:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.114 21:26:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.114 21:26:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:21.114 21:26:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:21.114 21:26:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:21.114 21:26:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:21.114 21:26:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:21.114 21:26:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:21.114 21:26:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.114 21:26:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.114 21:26:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:21.114 21:26:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:21.114 21:26:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:21.114 21:26:10 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:21.114 21:26:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:21.114 21:26:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.114 21:26:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:21.114 21:26:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:21.114 21:26:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:21.114 21:26:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:21.114 21:26:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:21.114 21:26:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:21.114 Cannot find device "nvmf_tgt_br" 00:23:21.114 21:26:10 -- nvmf/common.sh@155 -- # true 00:23:21.114 21:26:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:21.114 Cannot find device "nvmf_tgt_br2" 00:23:21.114 21:26:10 -- nvmf/common.sh@156 -- # true 00:23:21.114 21:26:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:21.114 21:26:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:21.114 Cannot find device "nvmf_tgt_br" 00:23:21.114 21:26:10 -- nvmf/common.sh@158 -- # true 00:23:21.114 21:26:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:21.373 Cannot find device "nvmf_tgt_br2" 00:23:21.373 21:26:10 -- nvmf/common.sh@159 -- # true 00:23:21.373 21:26:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:21.373 21:26:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:21.373 21:26:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:21.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:21.373 21:26:10 -- nvmf/common.sh@162 -- # true 00:23:21.373 21:26:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:21.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:21.373 21:26:10 -- nvmf/common.sh@163 -- # true 00:23:21.373 21:26:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:21.373 21:26:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:21.373 21:26:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:21.373 21:26:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:21.374 21:26:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:21.374 21:26:10 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:21.374 21:26:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:21.374 21:26:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:21.374 21:26:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:21.374 21:26:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:21.374 21:26:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:21.374 21:26:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:21.374 21:26:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:21.374 21:26:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:21.374 21:26:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:23:21.374 21:26:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:21.374 21:26:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:21.374 21:26:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:21.374 21:26:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:21.374 21:26:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:21.374 21:26:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:21.374 21:26:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:21.374 21:26:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:21.374 21:26:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:21.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:23:21.374 00:23:21.374 --- 10.0.0.2 ping statistics --- 00:23:21.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.374 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:21.374 21:26:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:21.374 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:21.374 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:23:21.374 00:23:21.374 --- 10.0.0.3 ping statistics --- 00:23:21.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.374 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:21.374 21:26:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:21.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:21.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:21.374 00:23:21.374 --- 10.0.0.1 ping statistics --- 00:23:21.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.374 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:21.374 21:26:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.374 21:26:10 -- nvmf/common.sh@422 -- # return 0 00:23:21.374 21:26:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:21.374 21:26:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.374 21:26:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:21.374 21:26:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:21.374 21:26:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.374 21:26:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:21.374 21:26:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:21.374 21:26:10 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:23:21.374 21:26:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:21.374 21:26:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:21.374 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:23:21.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:21.374 21:26:10 -- nvmf/common.sh@470 -- # nvmfpid=88568 00:23:21.374 21:26:10 -- nvmf/common.sh@471 -- # waitforlisten 88568 00:23:21.374 21:26:10 -- common/autotest_common.sh@817 -- # '[' -z 88568 ']' 00:23:21.374 21:26:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.374 21:26:10 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:21.374 21:26:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:21.374 21:26:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.374 21:26:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:21.374 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:23:21.634 [2024-04-26 21:26:10.665893] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:21.634 [2024-04-26 21:26:10.665966] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.634 [2024-04-26 21:26:10.798247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.634 [2024-04-26 21:26:10.850435] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.634 [2024-04-26 21:26:10.850600] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.634 [2024-04-26 21:26:10.850646] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.634 [2024-04-26 21:26:10.850689] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.634 [2024-04-26 21:26:10.850715] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
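For readers skimming the trace, the nvmf_veth_init and nvmfappstart steps above reduce to roughly the sketch below. Interface names, addresses, flags and the binary path are the ones printed in the log; $SPDK_REPO stands in for /home/vagrant/spdk_repo/spdk, and retries, cleanup and error handling are left out, so treat this as an outline of what the harness does rather than a verbatim excerpt.

    # target-side network namespace plus veth/bridge plumbing (as traced above)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2        # sanity check the path before starting the target
    # start the target inside the namespace; the harness then polls the RPC socket
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_REPO"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

The trace also sets up a second pair (nvmf_tgt_if2 with 10.0.0.3) in the same way; it is omitted here for brevity.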
00:23:21.634 [2024-04-26 21:26:10.850773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.893 21:26:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:21.893 21:26:10 -- common/autotest_common.sh@850 -- # return 0 00:23:21.893 21:26:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:21.893 21:26:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:21.893 21:26:10 -- common/autotest_common.sh@10 -- # set +x 00:23:21.893 21:26:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.893 21:26:11 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:22.153 [2024-04-26 21:26:11.211808] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.153 21:26:11 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:23:22.153 21:26:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:22.153 21:26:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:22.153 21:26:11 -- common/autotest_common.sh@10 -- # set +x 00:23:22.153 ************************************ 00:23:22.153 START TEST lvs_grow_clean 00:23:22.153 ************************************ 00:23:22.153 21:26:11 -- common/autotest_common.sh@1111 -- # lvs_grow 00:23:22.153 21:26:11 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:23:22.153 21:26:11 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:23:22.153 21:26:11 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:23:22.153 21:26:11 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:23:22.153 21:26:11 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:23:22.153 21:26:11 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:23:22.153 21:26:11 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:22.153 21:26:11 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:22.153 21:26:11 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:22.413 21:26:11 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:23:22.413 21:26:11 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:23:22.672 21:26:11 -- target/nvmf_lvs_grow.sh@28 -- # lvs=ac70554f-cde4-4aef-a9e9-36397ea5e53b 00:23:22.672 21:26:11 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:23:22.672 21:26:11 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac70554f-cde4-4aef-a9e9-36397ea5e53b 00:23:22.931 21:26:12 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:23:22.931 21:26:12 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:23:22.931 21:26:12 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ac70554f-cde4-4aef-a9e9-36397ea5e53b lvol 150 00:23:23.190 21:26:12 -- target/nvmf_lvs_grow.sh@33 -- # lvol=ef63699b-fb08-4162-8a83-640ac9799318 00:23:23.190 21:26:12 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:23.190 21:26:12 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:23:23.450 [2024-04-26 21:26:12.519177] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:23:23.450 [2024-04-26 21:26:12.519240] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:23:23.450 true 00:23:23.450 21:26:12 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac70554f-cde4-4aef-a9e9-36397ea5e53b 00:23:23.450 21:26:12 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:23:23.708 21:26:12 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:23:23.708 21:26:12 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:23.968 21:26:13 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ef63699b-fb08-4162-8a83-640ac9799318 00:23:24.226 21:26:13 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:24.226 [2024-04-26 21:26:13.414236] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.226 21:26:13 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:24.484 21:26:13 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=88717 00:23:24.484 21:26:13 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:23:24.484 21:26:13 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:24.484 21:26:13 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 88717 /var/tmp/bdevperf.sock 00:23:24.484 21:26:13 -- common/autotest_common.sh@817 -- # '[' -z 88717 ']' 00:23:24.484 21:26:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.484 21:26:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:24.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.484 21:26:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.484 21:26:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:24.484 21:26:13 -- common/autotest_common.sh@10 -- # set +x 00:23:24.484 [2024-04-26 21:26:13.700866] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:23:24.484 [2024-04-26 21:26:13.700939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88717 ] 00:23:24.743 [2024-04-26 21:26:13.841042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.743 [2024-04-26 21:26:13.894363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.679 21:26:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:25.679 21:26:14 -- common/autotest_common.sh@850 -- # return 0 00:23:25.680 21:26:14 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:23:25.939 Nvme0n1 00:23:25.939 21:26:14 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:23:26.198 [ 00:23:26.198 { 00:23:26.198 "aliases": [ 00:23:26.198 "ef63699b-fb08-4162-8a83-640ac9799318" 00:23:26.198 ], 00:23:26.198 "assigned_rate_limits": { 00:23:26.198 "r_mbytes_per_sec": 0, 00:23:26.198 "rw_ios_per_sec": 0, 00:23:26.198 "rw_mbytes_per_sec": 0, 00:23:26.198 "w_mbytes_per_sec": 0 00:23:26.198 }, 00:23:26.198 "block_size": 4096, 00:23:26.198 "claimed": false, 00:23:26.198 "driver_specific": { 00:23:26.198 "mp_policy": "active_passive", 00:23:26.198 "nvme": [ 00:23:26.198 { 00:23:26.198 "ctrlr_data": { 00:23:26.198 "ana_reporting": false, 00:23:26.198 "cntlid": 1, 00:23:26.198 "firmware_revision": "24.05", 00:23:26.198 "model_number": "SPDK bdev Controller", 00:23:26.198 "multi_ctrlr": true, 00:23:26.198 "oacs": { 00:23:26.198 "firmware": 0, 00:23:26.198 "format": 0, 00:23:26.198 "ns_manage": 0, 00:23:26.198 "security": 0 00:23:26.198 }, 00:23:26.198 "serial_number": "SPDK0", 00:23:26.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:26.198 "vendor_id": "0x8086" 00:23:26.198 }, 00:23:26.198 "ns_data": { 00:23:26.198 "can_share": true, 00:23:26.198 "id": 1 00:23:26.198 }, 00:23:26.198 "trid": { 00:23:26.198 "adrfam": "IPv4", 00:23:26.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:26.198 "traddr": "10.0.0.2", 00:23:26.198 "trsvcid": "4420", 00:23:26.198 "trtype": "TCP" 00:23:26.198 }, 00:23:26.198 "vs": { 00:23:26.198 "nvme_version": "1.3" 00:23:26.198 } 00:23:26.198 } 00:23:26.198 ] 00:23:26.198 }, 00:23:26.198 "memory_domains": [ 00:23:26.198 { 00:23:26.198 "dma_device_id": "system", 00:23:26.198 "dma_device_type": 1 00:23:26.198 } 00:23:26.198 ], 00:23:26.198 "name": "Nvme0n1", 00:23:26.198 "num_blocks": 38912, 00:23:26.198 "product_name": "NVMe disk", 00:23:26.198 "supported_io_types": { 00:23:26.198 "abort": true, 00:23:26.198 "compare": true, 00:23:26.198 "compare_and_write": true, 00:23:26.198 "flush": true, 00:23:26.198 "nvme_admin": true, 00:23:26.198 "nvme_io": true, 00:23:26.198 "read": true, 00:23:26.198 "reset": true, 00:23:26.198 "unmap": true, 00:23:26.198 "write": true, 00:23:26.198 "write_zeroes": true 00:23:26.198 }, 00:23:26.198 "uuid": "ef63699b-fb08-4162-8a83-640ac9799318", 00:23:26.198 "zoned": false 00:23:26.198 } 00:23:26.198 ] 00:23:26.198 21:26:15 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=88765 00:23:26.198 21:26:15 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:26.198 21:26:15 -- target/nvmf_lvs_grow.sh@57 
-- # sleep 2 00:23:26.198 Running I/O for 10 seconds... 00:23:27.136 Latency(us) 00:23:27.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:27.136 Nvme0n1 : 1.00 9856.00 38.50 0.00 0.00 0.00 0.00 0.00 00:23:27.136 =================================================================================================================== 00:23:27.136 Total : 9856.00 38.50 0.00 0.00 0.00 0.00 0.00 00:23:27.136 00:23:28.072 21:26:17 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ac70554f-cde4-4aef-a9e9-36397ea5e53b 00:23:28.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:28.072 Nvme0n1 : 2.00 9894.00 38.65 0.00 0.00 0.00 0.00 0.00 00:23:28.072 =================================================================================================================== 00:23:28.072 Total : 9894.00 38.65 0.00 0.00 0.00 0.00 0.00 00:23:28.072 00:23:28.331 true 00:23:28.331 21:26:17 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac70554f-cde4-4aef-a9e9-36397ea5e53b 00:23:28.331 21:26:17 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:23:28.590 21:26:17 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:23:28.590 21:26:17 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:23:28.590 21:26:17 -- target/nvmf_lvs_grow.sh@65 -- # wait 88765 00:23:29.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:29.158 Nvme0n1 : 3.00 9926.67 38.78 0.00 0.00 0.00 0.00 0.00 00:23:29.158 =================================================================================================================== 00:23:29.158 Total : 9926.67 38.78 0.00 0.00 0.00 0.00 0.00 00:23:29.158 00:23:30.094 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:30.094 Nvme0n1 : 4.00 9910.25 38.71 0.00 0.00 0.00 0.00 0.00 00:23:30.094 =================================================================================================================== 00:23:30.094 Total : 9910.25 38.71 0.00 0.00 0.00 0.00 0.00 00:23:30.094 00:23:31.089 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:31.089 Nvme0n1 : 5.00 9880.40 38.60 0.00 0.00 0.00 0.00 0.00 00:23:31.089 =================================================================================================================== 00:23:31.089 Total : 9880.40 38.60 0.00 0.00 0.00 0.00 0.00 00:23:31.089 00:23:32.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:32.465 Nvme0n1 : 6.00 9866.67 38.54 0.00 0.00 0.00 0.00 0.00 00:23:32.465 =================================================================================================================== 00:23:32.465 Total : 9866.67 38.54 0.00 0.00 0.00 0.00 0.00 00:23:32.465 00:23:33.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:33.398 Nvme0n1 : 7.00 9842.29 38.45 0.00 0.00 0.00 0.00 0.00 00:23:33.398 =================================================================================================================== 00:23:33.398 Total : 9842.29 38.45 0.00 0.00 0.00 0.00 0.00 00:23:33.398 00:23:34.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:34.336 Nvme0n1 : 8.00 9808.12 38.31 0.00 0.00 0.00 0.00 0.00 00:23:34.336 
=================================================================================================================== 00:23:34.336 Total : 9808.12 38.31 0.00 0.00 0.00 0.00 0.00 00:23:34.336 00:23:35.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:35.275 Nvme0n1 : 9.00 9794.89 38.26 0.00 0.00 0.00 0.00 0.00 00:23:35.275 =================================================================================================================== 00:23:35.275 Total : 9794.89 38.26 0.00 0.00 0.00 0.00 0.00 00:23:35.275 00:23:36.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:36.224 Nvme0n1 : 10.00 9779.50 38.20 0.00 0.00 0.00 0.00 0.00 00:23:36.224 =================================================================================================================== 00:23:36.224 Total : 9779.50 38.20 0.00 0.00 0.00 0.00 0.00 00:23:36.224 00:23:36.224 00:23:36.224 Latency(us) 00:23:36.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:36.224 Nvme0n1 : 10.01 9784.38 38.22 0.00 0.00 13075.41 4206.90 27130.19 00:23:36.224 =================================================================================================================== 00:23:36.224 Total : 9784.38 38.22 0.00 0.00 13075.41 4206.90 27130.19 00:23:36.224 0 00:23:36.224 21:26:25 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 88717 00:23:36.224 21:26:25 -- common/autotest_common.sh@936 -- # '[' -z 88717 ']' 00:23:36.224 21:26:25 -- common/autotest_common.sh@940 -- # kill -0 88717 00:23:36.224 21:26:25 -- common/autotest_common.sh@941 -- # uname 00:23:36.224 21:26:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:36.224 21:26:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88717 00:23:36.224 21:26:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:36.224 21:26:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:36.224 killing process with pid 88717 00:23:36.224 21:26:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88717' 00:23:36.224 21:26:25 -- common/autotest_common.sh@955 -- # kill 88717 00:23:36.224 Received shutdown signal, test time was about 10.000000 seconds 00:23:36.224 00:23:36.224 Latency(us) 00:23:36.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.224 =================================================================================================================== 00:23:36.224 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.224 21:26:25 -- common/autotest_common.sh@960 -- # wait 88717 00:23:36.494 21:26:25 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:36.762 21:26:25 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac70554f-cde4-4aef-a9e9-36397ea5e53b 00:23:36.762 21:26:25 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:23:36.762 21:26:25 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:23:36.762 21:26:25 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:23:36.762 21:26:25 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:37.043 [2024-04-26 21:26:26.210891] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:23:37.044 
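Stripped of the harness plumbing, the grow path that lvs_grow_clean just verified (49 data clusters before the grow, 99 after) is roughly the sequence below. $rpc and $aio_file are shorthand for the rpc.py path and the AIO backing file used above, and the lvstore UUID is captured from bdev_lvol_create_lvstore instead of hard-coding the one from this run.

    rpc="$SPDK_REPO"/scripts/rpc.py
    truncate -s 200M "$aio_file"                                      # initial 200M backing file
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_create -u "$lvs" lvol 150                          # consumes most of the 49 clusters
    truncate -s 400M "$aio_file"                                      # grow the file on disk
    $rpc bdev_aio_rescan aio_bdev                                     # bdev picks up the new block count
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                             # lvstore claims the added clusters
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99

The rescan is the step that matters: growing the file alone changes nothing until bdev_aio_rescan reports the new block count (51200 to 102400 in the trace above), after which bdev_lvol_grow_lvstore can extend the lvstore metadata, with I/O still running against the exported lvol.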
21:26:26 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac70554f-cde4-4aef-a9e9-36397ea5e53b 00:23:37.044 21:26:26 -- common/autotest_common.sh@638 -- # local es=0 00:23:37.044 21:26:26 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac70554f-cde4-4aef-a9e9-36397ea5e53b 00:23:37.044 21:26:26 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:37.044 21:26:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:37.044 21:26:26 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:37.044 21:26:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:37.044 21:26:26 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:37.044 21:26:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:37.044 21:26:26 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:37.044 21:26:26 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:37.044 21:26:26 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac70554f-cde4-4aef-a9e9-36397ea5e53b 00:23:37.322 2024/04/26 21:26:26 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:ac70554f-cde4-4aef-a9e9-36397ea5e53b], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:23:37.322 request: 00:23:37.322 { 00:23:37.322 "method": "bdev_lvol_get_lvstores", 00:23:37.322 "params": { 00:23:37.322 "uuid": "ac70554f-cde4-4aef-a9e9-36397ea5e53b" 00:23:37.322 } 00:23:37.322 } 00:23:37.322 Got JSON-RPC error response 00:23:37.322 GoRPCClient: error on JSON-RPC call 00:23:37.322 21:26:26 -- common/autotest_common.sh@641 -- # es=1 00:23:37.322 21:26:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:37.322 21:26:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:37.322 21:26:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:37.322 21:26:26 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:37.596 aio_bdev 00:23:37.596 21:26:26 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev ef63699b-fb08-4162-8a83-640ac9799318 00:23:37.596 21:26:26 -- common/autotest_common.sh@885 -- # local bdev_name=ef63699b-fb08-4162-8a83-640ac9799318 00:23:37.596 21:26:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:23:37.596 21:26:26 -- common/autotest_common.sh@887 -- # local i 00:23:37.596 21:26:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:23:37.596 21:26:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:23:37.596 21:26:26 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:37.858 21:26:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ef63699b-fb08-4162-8a83-640ac9799318 -t 2000 00:23:38.136 [ 00:23:38.136 { 00:23:38.136 "aliases": [ 00:23:38.136 "lvs/lvol" 00:23:38.136 ], 00:23:38.136 "assigned_rate_limits": { 00:23:38.136 "r_mbytes_per_sec": 0, 00:23:38.136 "rw_ios_per_sec": 0, 00:23:38.136 "rw_mbytes_per_sec": 0, 00:23:38.136 "w_mbytes_per_sec": 0 00:23:38.136 }, 00:23:38.136 "block_size": 4096, 
00:23:38.136 "claimed": false, 00:23:38.136 "driver_specific": { 00:23:38.136 "lvol": { 00:23:38.136 "base_bdev": "aio_bdev", 00:23:38.136 "clone": false, 00:23:38.136 "esnap_clone": false, 00:23:38.136 "lvol_store_uuid": "ac70554f-cde4-4aef-a9e9-36397ea5e53b", 00:23:38.136 "snapshot": false, 00:23:38.136 "thin_provision": false 00:23:38.136 } 00:23:38.136 }, 00:23:38.136 "name": "ef63699b-fb08-4162-8a83-640ac9799318", 00:23:38.136 "num_blocks": 38912, 00:23:38.136 "product_name": "Logical Volume", 00:23:38.136 "supported_io_types": { 00:23:38.136 "abort": false, 00:23:38.136 "compare": false, 00:23:38.136 "compare_and_write": false, 00:23:38.136 "flush": false, 00:23:38.136 "nvme_admin": false, 00:23:38.136 "nvme_io": false, 00:23:38.136 "read": true, 00:23:38.136 "reset": true, 00:23:38.136 "unmap": true, 00:23:38.136 "write": true, 00:23:38.136 "write_zeroes": true 00:23:38.136 }, 00:23:38.136 "uuid": "ef63699b-fb08-4162-8a83-640ac9799318", 00:23:38.136 "zoned": false 00:23:38.136 } 00:23:38.136 ] 00:23:38.136 21:26:27 -- common/autotest_common.sh@893 -- # return 0 00:23:38.136 21:26:27 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac70554f-cde4-4aef-a9e9-36397ea5e53b 00:23:38.136 21:26:27 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:23:38.446 21:26:27 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:23:38.446 21:26:27 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ac70554f-cde4-4aef-a9e9-36397ea5e53b 00:23:38.446 21:26:27 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:23:38.446 21:26:27 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:23:38.446 21:26:27 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ef63699b-fb08-4162-8a83-640ac9799318 00:23:38.704 21:26:27 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ac70554f-cde4-4aef-a9e9-36397ea5e53b 00:23:38.963 21:26:28 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:39.222 21:26:28 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:39.790 ************************************ 00:23:39.790 END TEST lvs_grow_clean 00:23:39.790 ************************************ 00:23:39.790 00:23:39.790 real 0m17.442s 00:23:39.790 user 0m16.759s 00:23:39.790 sys 0m2.025s 00:23:39.790 21:26:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:39.790 21:26:28 -- common/autotest_common.sh@10 -- # set +x 00:23:39.790 21:26:28 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:23:39.790 21:26:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:39.790 21:26:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:39.790 21:26:28 -- common/autotest_common.sh@10 -- # set +x 00:23:39.790 ************************************ 00:23:39.790 START TEST lvs_grow_dirty 00:23:39.790 ************************************ 00:23:39.790 21:26:28 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:23:39.790 21:26:28 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:23:39.790 21:26:28 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:23:39.790 21:26:28 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:23:39.790 21:26:28 -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:23:39.790 21:26:28 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:23:39.790 21:26:28 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:23:39.790 21:26:28 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:39.790 21:26:28 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:39.790 21:26:28 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:40.048 21:26:29 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:23:40.048 21:26:29 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:23:40.307 21:26:29 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:40.307 21:26:29 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:23:40.307 21:26:29 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:40.565 21:26:29 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:23:40.565 21:26:29 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:23:40.565 21:26:29 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4ffcce4b-75ab-4d74-add3-bf40e6b6184f lvol 150 00:23:40.823 21:26:29 -- target/nvmf_lvs_grow.sh@33 -- # lvol=ad011a58-4a51-47e3-9c56-a5985f7634bf 00:23:40.823 21:26:29 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:40.823 21:26:29 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:23:40.823 [2024-04-26 21:26:30.038350] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:23:40.823 [2024-04-26 21:26:30.038428] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:23:40.823 true 00:23:40.823 21:26:30 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:40.823 21:26:30 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:23:41.081 21:26:30 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:23:41.081 21:26:30 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:41.339 21:26:30 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ad011a58-4a51-47e3-9c56-a5985f7634bf 00:23:41.599 21:26:30 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:41.859 21:26:30 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:42.118 21:26:31 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=89151 00:23:42.118 21:26:31 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 
-q 128 -w randwrite -t 10 -S 1 -z 00:23:42.118 21:26:31 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:42.118 21:26:31 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 89151 /var/tmp/bdevperf.sock 00:23:42.118 21:26:31 -- common/autotest_common.sh@817 -- # '[' -z 89151 ']' 00:23:42.118 21:26:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.118 21:26:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:42.118 21:26:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.118 21:26:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:42.118 21:26:31 -- common/autotest_common.sh@10 -- # set +x 00:23:42.118 [2024-04-26 21:26:31.218812] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:42.118 [2024-04-26 21:26:31.218887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89151 ] 00:23:42.118 [2024-04-26 21:26:31.347133] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.376 [2024-04-26 21:26:31.420492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.958 21:26:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:42.958 21:26:32 -- common/autotest_common.sh@850 -- # return 0 00:23:42.958 21:26:32 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:23:43.216 Nvme0n1 00:23:43.216 21:26:32 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:23:43.475 [ 00:23:43.475 { 00:23:43.475 "aliases": [ 00:23:43.475 "ad011a58-4a51-47e3-9c56-a5985f7634bf" 00:23:43.475 ], 00:23:43.475 "assigned_rate_limits": { 00:23:43.475 "r_mbytes_per_sec": 0, 00:23:43.475 "rw_ios_per_sec": 0, 00:23:43.475 "rw_mbytes_per_sec": 0, 00:23:43.475 "w_mbytes_per_sec": 0 00:23:43.475 }, 00:23:43.475 "block_size": 4096, 00:23:43.475 "claimed": false, 00:23:43.475 "driver_specific": { 00:23:43.475 "mp_policy": "active_passive", 00:23:43.475 "nvme": [ 00:23:43.475 { 00:23:43.475 "ctrlr_data": { 00:23:43.475 "ana_reporting": false, 00:23:43.475 "cntlid": 1, 00:23:43.475 "firmware_revision": "24.05", 00:23:43.475 "model_number": "SPDK bdev Controller", 00:23:43.475 "multi_ctrlr": true, 00:23:43.475 "oacs": { 00:23:43.475 "firmware": 0, 00:23:43.475 "format": 0, 00:23:43.475 "ns_manage": 0, 00:23:43.475 "security": 0 00:23:43.475 }, 00:23:43.475 "serial_number": "SPDK0", 00:23:43.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.475 "vendor_id": "0x8086" 00:23:43.475 }, 00:23:43.475 "ns_data": { 00:23:43.475 "can_share": true, 00:23:43.475 "id": 1 00:23:43.475 }, 00:23:43.475 "trid": { 00:23:43.475 "adrfam": "IPv4", 00:23:43.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:43.475 "traddr": "10.0.0.2", 00:23:43.475 "trsvcid": "4420", 00:23:43.475 "trtype": "TCP" 00:23:43.475 }, 00:23:43.475 "vs": { 00:23:43.475 "nvme_version": "1.3" 00:23:43.475 } 00:23:43.475 } 00:23:43.475 ] 00:23:43.475 }, 00:23:43.475 
"memory_domains": [ 00:23:43.475 { 00:23:43.475 "dma_device_id": "system", 00:23:43.475 "dma_device_type": 1 00:23:43.475 } 00:23:43.476 ], 00:23:43.476 "name": "Nvme0n1", 00:23:43.476 "num_blocks": 38912, 00:23:43.476 "product_name": "NVMe disk", 00:23:43.476 "supported_io_types": { 00:23:43.476 "abort": true, 00:23:43.476 "compare": true, 00:23:43.476 "compare_and_write": true, 00:23:43.476 "flush": true, 00:23:43.476 "nvme_admin": true, 00:23:43.476 "nvme_io": true, 00:23:43.476 "read": true, 00:23:43.476 "reset": true, 00:23:43.476 "unmap": true, 00:23:43.476 "write": true, 00:23:43.476 "write_zeroes": true 00:23:43.476 }, 00:23:43.476 "uuid": "ad011a58-4a51-47e3-9c56-a5985f7634bf", 00:23:43.476 "zoned": false 00:23:43.476 } 00:23:43.476 ] 00:23:43.476 21:26:32 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=89193 00:23:43.476 21:26:32 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:43.476 21:26:32 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:23:43.734 Running I/O for 10 seconds... 00:23:44.668 Latency(us) 00:23:44.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:44.668 Nvme0n1 : 1.00 10008.00 39.09 0.00 0.00 0.00 0.00 0.00 00:23:44.668 =================================================================================================================== 00:23:44.669 Total : 10008.00 39.09 0.00 0.00 0.00 0.00 0.00 00:23:44.669 00:23:45.661 21:26:34 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:45.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:45.661 Nvme0n1 : 2.00 9947.50 38.86 0.00 0.00 0.00 0.00 0.00 00:23:45.661 =================================================================================================================== 00:23:45.661 Total : 9947.50 38.86 0.00 0.00 0.00 0.00 0.00 00:23:45.661 00:23:45.661 true 00:23:45.661 21:26:34 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:45.661 21:26:34 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:23:45.920 21:26:35 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:23:45.920 21:26:35 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:23:45.920 21:26:35 -- target/nvmf_lvs_grow.sh@65 -- # wait 89193 00:23:46.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:46.856 Nvme0n1 : 3.00 9932.67 38.80 0.00 0.00 0.00 0.00 0.00 00:23:46.856 =================================================================================================================== 00:23:46.856 Total : 9932.67 38.80 0.00 0.00 0.00 0.00 0.00 00:23:46.856 00:23:47.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:47.792 Nvme0n1 : 4.00 9685.50 37.83 0.00 0.00 0.00 0.00 0.00 00:23:47.792 =================================================================================================================== 00:23:47.792 Total : 9685.50 37.83 0.00 0.00 0.00 0.00 0.00 00:23:47.792 00:23:48.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:48.729 Nvme0n1 : 5.00 9070.00 35.43 0.00 0.00 0.00 0.00 0.00 00:23:48.729 
=================================================================================================================== 00:23:48.729 Total : 9070.00 35.43 0.00 0.00 0.00 0.00 0.00 00:23:48.729 00:23:49.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:49.666 Nvme0n1 : 6.00 9071.00 35.43 0.00 0.00 0.00 0.00 0.00 00:23:49.666 =================================================================================================================== 00:23:49.666 Total : 9071.00 35.43 0.00 0.00 0.00 0.00 0.00 00:23:49.666 00:23:50.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:50.603 Nvme0n1 : 7.00 9146.86 35.73 0.00 0.00 0.00 0.00 0.00 00:23:50.603 =================================================================================================================== 00:23:50.603 Total : 9146.86 35.73 0.00 0.00 0.00 0.00 0.00 00:23:50.603 00:23:51.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:51.539 Nvme0n1 : 8.00 9206.50 35.96 0.00 0.00 0.00 0.00 0.00 00:23:51.539 =================================================================================================================== 00:23:51.539 Total : 9206.50 35.96 0.00 0.00 0.00 0.00 0.00 00:23:51.539 00:23:52.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:52.504 Nvme0n1 : 9.00 8887.56 34.72 0.00 0.00 0.00 0.00 0.00 00:23:52.504 =================================================================================================================== 00:23:52.504 Total : 8887.56 34.72 0.00 0.00 0.00 0.00 0.00 00:23:52.504 00:23:53.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:53.880 Nvme0n1 : 10.00 8942.10 34.93 0.00 0.00 0.00 0.00 0.00 00:23:53.880 =================================================================================================================== 00:23:53.880 Total : 8942.10 34.93 0.00 0.00 0.00 0.00 0.00 00:23:53.880 00:23:53.880 00:23:53.880 Latency(us) 00:23:53.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:53.880 Nvme0n1 : 10.01 8945.40 34.94 0.00 0.00 14300.79 4235.51 366314.76 00:23:53.880 =================================================================================================================== 00:23:53.880 Total : 8945.40 34.94 0.00 0.00 14300.79 4235.51 366314.76 00:23:53.880 0 00:23:53.880 21:26:42 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 89151 00:23:53.880 21:26:42 -- common/autotest_common.sh@936 -- # '[' -z 89151 ']' 00:23:53.880 21:26:42 -- common/autotest_common.sh@940 -- # kill -0 89151 00:23:53.880 21:26:42 -- common/autotest_common.sh@941 -- # uname 00:23:53.880 21:26:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:53.880 21:26:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89151 00:23:53.880 21:26:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:53.880 21:26:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:53.880 21:26:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89151' 00:23:53.880 killing process with pid 89151 00:23:53.880 21:26:42 -- common/autotest_common.sh@955 -- # kill 89151 00:23:53.880 Received shutdown signal, test time was about 10.000000 seconds 00:23:53.880 00:23:53.880 Latency(us) 00:23:53.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.880 
=================================================================================================================== 00:23:53.880 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:53.880 21:26:42 -- common/autotest_common.sh@960 -- # wait 89151 00:23:53.880 21:26:43 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:54.138 21:26:43 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:54.138 21:26:43 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:23:54.396 21:26:43 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:23:54.396 21:26:43 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:23:54.396 21:26:43 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 88568 00:23:54.396 21:26:43 -- target/nvmf_lvs_grow.sh@74 -- # wait 88568 00:23:54.396 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 88568 Killed "${NVMF_APP[@]}" "$@" 00:23:54.396 21:26:43 -- target/nvmf_lvs_grow.sh@74 -- # true 00:23:54.396 21:26:43 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:23:54.396 21:26:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:54.396 21:26:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:54.396 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:23:54.396 21:26:43 -- nvmf/common.sh@470 -- # nvmfpid=89349 00:23:54.396 21:26:43 -- nvmf/common.sh@471 -- # waitforlisten 89349 00:23:54.396 21:26:43 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:54.396 21:26:43 -- common/autotest_common.sh@817 -- # '[' -z 89349 ']' 00:23:54.396 21:26:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.396 21:26:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:54.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.396 21:26:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.396 21:26:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:54.396 21:26:43 -- common/autotest_common.sh@10 -- # set +x 00:23:54.396 [2024-04-26 21:26:43.558333] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:54.396 [2024-04-26 21:26:43.558446] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.765 [2024-04-26 21:26:43.705365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.765 [2024-04-26 21:26:43.757128] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.765 [2024-04-26 21:26:43.757190] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.765 [2024-04-26 21:26:43.757197] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.765 [2024-04-26 21:26:43.757203] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.765 [2024-04-26 21:26:43.757208] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
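The dirty variant exercised next follows the same grow sequence but differs in teardown: instead of deleting the lvol and lvstore, the target holding them is killed hard and a fresh target re-opens the same AIO file, so the blobstore has to recover on load (the bs_recover notices just below). In outline, using the same $rpc, $aio_file and $SPDK_REPO placeholders as the sketches above and $nvmfpid for the pid printed in the trace:

    kill -9 "$nvmfpid"                                           # leave the lvstore metadata dirty
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_REPO"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # waitforlisten <new pid>  (harness helper that waits for /var/tmp/spdk.sock)
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096               # loading the blobstore triggers recovery
    $rpc bdev_lvol_get_lvstores -u "$lvs"                        # lvstore and lvol reappear after recovery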
00:23:54.765 [2024-04-26 21:26:43.757233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.332 21:26:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:55.332 21:26:44 -- common/autotest_common.sh@850 -- # return 0 00:23:55.332 21:26:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:55.332 21:26:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:55.332 21:26:44 -- common/autotest_common.sh@10 -- # set +x 00:23:55.332 21:26:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.332 21:26:44 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:55.590 [2024-04-26 21:26:44.731018] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:55.590 [2024-04-26 21:26:44.731261] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:55.590 [2024-04-26 21:26:44.731391] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:55.590 21:26:44 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:23:55.590 21:26:44 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev ad011a58-4a51-47e3-9c56-a5985f7634bf 00:23:55.590 21:26:44 -- common/autotest_common.sh@885 -- # local bdev_name=ad011a58-4a51-47e3-9c56-a5985f7634bf 00:23:55.590 21:26:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:23:55.590 21:26:44 -- common/autotest_common.sh@887 -- # local i 00:23:55.590 21:26:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:23:55.590 21:26:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:23:55.590 21:26:44 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:55.849 21:26:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad011a58-4a51-47e3-9c56-a5985f7634bf -t 2000 00:23:56.108 [ 00:23:56.108 { 00:23:56.108 "aliases": [ 00:23:56.108 "lvs/lvol" 00:23:56.108 ], 00:23:56.108 "assigned_rate_limits": { 00:23:56.108 "r_mbytes_per_sec": 0, 00:23:56.108 "rw_ios_per_sec": 0, 00:23:56.108 "rw_mbytes_per_sec": 0, 00:23:56.108 "w_mbytes_per_sec": 0 00:23:56.108 }, 00:23:56.108 "block_size": 4096, 00:23:56.108 "claimed": false, 00:23:56.108 "driver_specific": { 00:23:56.108 "lvol": { 00:23:56.108 "base_bdev": "aio_bdev", 00:23:56.108 "clone": false, 00:23:56.108 "esnap_clone": false, 00:23:56.108 "lvol_store_uuid": "4ffcce4b-75ab-4d74-add3-bf40e6b6184f", 00:23:56.108 "snapshot": false, 00:23:56.108 "thin_provision": false 00:23:56.108 } 00:23:56.108 }, 00:23:56.108 "name": "ad011a58-4a51-47e3-9c56-a5985f7634bf", 00:23:56.108 "num_blocks": 38912, 00:23:56.108 "product_name": "Logical Volume", 00:23:56.108 "supported_io_types": { 00:23:56.108 "abort": false, 00:23:56.108 "compare": false, 00:23:56.108 "compare_and_write": false, 00:23:56.108 "flush": false, 00:23:56.108 "nvme_admin": false, 00:23:56.108 "nvme_io": false, 00:23:56.108 "read": true, 00:23:56.108 "reset": true, 00:23:56.108 "unmap": true, 00:23:56.108 "write": true, 00:23:56.108 "write_zeroes": true 00:23:56.108 }, 00:23:56.108 "uuid": "ad011a58-4a51-47e3-9c56-a5985f7634bf", 00:23:56.108 "zoned": false 00:23:56.108 } 00:23:56.108 ] 00:23:56.108 21:26:45 -- common/autotest_common.sh@893 -- # return 0 00:23:56.108 21:26:45 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:56.108 21:26:45 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:23:56.367 21:26:45 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:23:56.367 21:26:45 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:23:56.367 21:26:45 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:56.625 21:26:45 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:23:56.625 21:26:45 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:56.884 [2024-04-26 21:26:45.906637] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:23:56.884 21:26:45 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:56.884 21:26:45 -- common/autotest_common.sh@638 -- # local es=0 00:23:56.884 21:26:45 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:56.884 21:26:45 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:56.884 21:26:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:56.884 21:26:45 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:56.884 21:26:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:56.884 21:26:45 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:56.884 21:26:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:56.884 21:26:45 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:56.884 21:26:45 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:56.884 21:26:45 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:57.143 2024/04/26 21:26:46 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:4ffcce4b-75ab-4d74-add3-bf40e6b6184f], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:23:57.143 request: 00:23:57.143 { 00:23:57.143 "method": "bdev_lvol_get_lvstores", 00:23:57.143 "params": { 00:23:57.143 "uuid": "4ffcce4b-75ab-4d74-add3-bf40e6b6184f" 00:23:57.143 } 00:23:57.143 } 00:23:57.143 Got JSON-RPC error response 00:23:57.143 GoRPCClient: error on JSON-RPC call 00:23:57.143 21:26:46 -- common/autotest_common.sh@641 -- # es=1 00:23:57.143 21:26:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:57.143 21:26:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:57.143 21:26:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:57.143 21:26:46 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:23:57.402 aio_bdev 00:23:57.402 21:26:46 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev ad011a58-4a51-47e3-9c56-a5985f7634bf 00:23:57.402 21:26:46 -- common/autotest_common.sh@885 -- # local bdev_name=ad011a58-4a51-47e3-9c56-a5985f7634bf 00:23:57.402 21:26:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:23:57.402 
21:26:46 -- common/autotest_common.sh@887 -- # local i 00:23:57.402 21:26:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:23:57.402 21:26:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:23:57.402 21:26:46 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:57.660 21:26:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ad011a58-4a51-47e3-9c56-a5985f7634bf -t 2000 00:23:57.660 [ 00:23:57.660 { 00:23:57.660 "aliases": [ 00:23:57.660 "lvs/lvol" 00:23:57.660 ], 00:23:57.660 "assigned_rate_limits": { 00:23:57.661 "r_mbytes_per_sec": 0, 00:23:57.661 "rw_ios_per_sec": 0, 00:23:57.661 "rw_mbytes_per_sec": 0, 00:23:57.661 "w_mbytes_per_sec": 0 00:23:57.661 }, 00:23:57.661 "block_size": 4096, 00:23:57.661 "claimed": false, 00:23:57.661 "driver_specific": { 00:23:57.661 "lvol": { 00:23:57.661 "base_bdev": "aio_bdev", 00:23:57.661 "clone": false, 00:23:57.661 "esnap_clone": false, 00:23:57.661 "lvol_store_uuid": "4ffcce4b-75ab-4d74-add3-bf40e6b6184f", 00:23:57.661 "snapshot": false, 00:23:57.661 "thin_provision": false 00:23:57.661 } 00:23:57.661 }, 00:23:57.661 "name": "ad011a58-4a51-47e3-9c56-a5985f7634bf", 00:23:57.661 "num_blocks": 38912, 00:23:57.661 "product_name": "Logical Volume", 00:23:57.661 "supported_io_types": { 00:23:57.661 "abort": false, 00:23:57.661 "compare": false, 00:23:57.661 "compare_and_write": false, 00:23:57.661 "flush": false, 00:23:57.661 "nvme_admin": false, 00:23:57.661 "nvme_io": false, 00:23:57.661 "read": true, 00:23:57.661 "reset": true, 00:23:57.661 "unmap": true, 00:23:57.661 "write": true, 00:23:57.661 "write_zeroes": true 00:23:57.661 }, 00:23:57.661 "uuid": "ad011a58-4a51-47e3-9c56-a5985f7634bf", 00:23:57.661 "zoned": false 00:23:57.661 } 00:23:57.661 ] 00:23:57.661 21:26:46 -- common/autotest_common.sh@893 -- # return 0 00:23:57.661 21:26:46 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:57.661 21:26:46 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:23:57.920 21:26:47 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:23:57.920 21:26:47 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:57.920 21:26:47 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:23:58.179 21:26:47 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:23:58.179 21:26:47 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ad011a58-4a51-47e3-9c56-a5985f7634bf 00:23:58.438 21:26:47 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ffcce4b-75ab-4d74-add3-bf40e6b6184f 00:23:58.696 21:26:47 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:23:58.954 21:26:48 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:23:59.522 00:23:59.522 real 0m19.675s 00:23:59.522 user 0m40.691s 00:23:59.522 sys 0m6.691s 00:23:59.522 21:26:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:59.522 21:26:48 -- common/autotest_common.sh@10 -- # set +x 00:23:59.522 ************************************ 00:23:59.522 END TEST lvs_grow_dirty 00:23:59.522 ************************************ 00:23:59.522 21:26:48 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:23:59.522 21:26:48 -- common/autotest_common.sh@794 -- # type=--id 00:23:59.522 21:26:48 -- common/autotest_common.sh@795 -- # id=0 00:23:59.522 21:26:48 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:23:59.522 21:26:48 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:59.522 21:26:48 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:23:59.522 21:26:48 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:23:59.522 21:26:48 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:23:59.522 21:26:48 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:59.522 nvmf_trace.0 00:23:59.522 21:26:48 -- common/autotest_common.sh@809 -- # return 0 00:23:59.522 21:26:48 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:23:59.522 21:26:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:59.522 21:26:48 -- nvmf/common.sh@117 -- # sync 00:23:59.522 21:26:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.522 21:26:48 -- nvmf/common.sh@120 -- # set +e 00:23:59.522 21:26:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.522 21:26:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.522 rmmod nvme_tcp 00:23:59.522 rmmod nvme_fabrics 00:23:59.782 rmmod nvme_keyring 00:23:59.782 21:26:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.782 21:26:48 -- nvmf/common.sh@124 -- # set -e 00:23:59.782 21:26:48 -- nvmf/common.sh@125 -- # return 0 00:23:59.782 21:26:48 -- nvmf/common.sh@478 -- # '[' -n 89349 ']' 00:23:59.782 21:26:48 -- nvmf/common.sh@479 -- # killprocess 89349 00:23:59.782 21:26:48 -- common/autotest_common.sh@936 -- # '[' -z 89349 ']' 00:23:59.782 21:26:48 -- common/autotest_common.sh@940 -- # kill -0 89349 00:23:59.782 21:26:48 -- common/autotest_common.sh@941 -- # uname 00:23:59.782 21:26:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:59.782 21:26:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89349 00:23:59.782 21:26:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:59.782 21:26:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:59.782 killing process with pid 89349 00:23:59.782 21:26:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89349' 00:23:59.782 21:26:48 -- common/autotest_common.sh@955 -- # kill 89349 00:23:59.782 21:26:48 -- common/autotest_common.sh@960 -- # wait 89349 00:24:00.042 21:26:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:00.042 21:26:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:00.042 21:26:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:00.042 21:26:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.042 21:26:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.042 21:26:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.042 21:26:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.042 21:26:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.042 21:26:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:00.042 00:24:00.042 real 0m38.911s 00:24:00.042 user 1m3.358s 00:24:00.042 sys 0m9.482s 00:24:00.042 21:26:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:00.042 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:24:00.042 
************************************ 00:24:00.042 END TEST nvmf_lvs_grow 00:24:00.042 ************************************ 00:24:00.042 21:26:49 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:24:00.042 21:26:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:00.042 21:26:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:00.042 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:24:00.042 ************************************ 00:24:00.042 START TEST nvmf_bdev_io_wait 00:24:00.042 ************************************ 00:24:00.042 21:26:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:24:00.302 * Looking for test storage... 00:24:00.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:00.302 21:26:49 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:00.302 21:26:49 -- nvmf/common.sh@7 -- # uname -s 00:24:00.302 21:26:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.303 21:26:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.303 21:26:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.303 21:26:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.303 21:26:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.303 21:26:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.303 21:26:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.303 21:26:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.303 21:26:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.303 21:26:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.303 21:26:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:24:00.303 21:26:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:24:00.303 21:26:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.303 21:26:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.303 21:26:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:00.303 21:26:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.303 21:26:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:00.303 21:26:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.303 21:26:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.303 21:26:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.303 21:26:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.303 21:26:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.303 21:26:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.303 21:26:49 -- paths/export.sh@5 -- # export PATH 00:24:00.303 21:26:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.303 21:26:49 -- nvmf/common.sh@47 -- # : 0 00:24:00.303 21:26:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:00.303 21:26:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:00.303 21:26:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.303 21:26:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.303 21:26:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.303 21:26:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:00.303 21:26:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:00.303 21:26:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:00.303 21:26:49 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:00.303 21:26:49 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:00.303 21:26:49 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:24:00.303 21:26:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:00.303 21:26:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.303 21:26:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:00.303 21:26:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:00.303 21:26:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:00.303 21:26:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.303 21:26:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.303 21:26:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.303 21:26:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:00.303 21:26:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:00.303 21:26:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:00.303 21:26:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:00.303 21:26:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
00:24:00.303 21:26:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:00.303 21:26:49 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.303 21:26:49 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.303 21:26:49 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:00.303 21:26:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:00.303 21:26:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:00.303 21:26:49 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:00.303 21:26:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:00.303 21:26:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.303 21:26:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:00.303 21:26:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:00.303 21:26:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:00.303 21:26:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:00.303 21:26:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:00.303 21:26:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:00.303 Cannot find device "nvmf_tgt_br" 00:24:00.303 21:26:49 -- nvmf/common.sh@155 -- # true 00:24:00.303 21:26:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:00.303 Cannot find device "nvmf_tgt_br2" 00:24:00.303 21:26:49 -- nvmf/common.sh@156 -- # true 00:24:00.303 21:26:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:00.303 21:26:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:00.303 Cannot find device "nvmf_tgt_br" 00:24:00.303 21:26:49 -- nvmf/common.sh@158 -- # true 00:24:00.303 21:26:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:00.303 Cannot find device "nvmf_tgt_br2" 00:24:00.303 21:26:49 -- nvmf/common.sh@159 -- # true 00:24:00.303 21:26:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:00.303 21:26:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:00.303 21:26:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.303 21:26:49 -- nvmf/common.sh@162 -- # true 00:24:00.303 21:26:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.303 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.303 21:26:49 -- nvmf/common.sh@163 -- # true 00:24:00.303 21:26:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:00.303 21:26:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:00.303 21:26:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:00.303 21:26:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:00.563 21:26:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:00.563 21:26:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:00.563 21:26:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:00.563 21:26:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:00.563 21:26:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:00.563 
21:26:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:00.563 21:26:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:00.563 21:26:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:00.563 21:26:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:00.563 21:26:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:00.563 21:26:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:00.563 21:26:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:00.563 21:26:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:00.563 21:26:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:00.563 21:26:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:00.563 21:26:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:00.563 21:26:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:00.563 21:26:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:00.563 21:26:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:00.563 21:26:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:00.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:24:00.563 00:24:00.563 --- 10.0.0.2 ping statistics --- 00:24:00.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.563 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:24:00.563 21:26:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:00.563 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:00.563 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:24:00.563 00:24:00.563 --- 10.0.0.3 ping statistics --- 00:24:00.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.563 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:24:00.563 21:26:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:00.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:24:00.563 00:24:00.563 --- 10.0.0.1 ping statistics --- 00:24:00.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.564 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:00.564 21:26:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.564 21:26:49 -- nvmf/common.sh@422 -- # return 0 00:24:00.564 21:26:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:00.564 21:26:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.564 21:26:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:00.564 21:26:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:00.564 21:26:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.564 21:26:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:00.564 21:26:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:00.564 21:26:49 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:00.564 21:26:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:00.564 21:26:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:00.564 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:24:00.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
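The ip commands traced above are nvmf_veth_init from test/nvmf/common.sh: the NVMe-oF target runs inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3) while the initiator side stays in the root namespace (10.0.0.1), all veth peers are bridged over nvmf_br, TCP port 4420 is opened in iptables, and the three pings confirm reachability. The essential commands for one target interface, collected from the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2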
00:24:00.564 21:26:49 -- nvmf/common.sh@470 -- # nvmfpid=89762 00:24:00.564 21:26:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:00.564 21:26:49 -- nvmf/common.sh@471 -- # waitforlisten 89762 00:24:00.564 21:26:49 -- common/autotest_common.sh@817 -- # '[' -z 89762 ']' 00:24:00.564 21:26:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.564 21:26:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:00.564 21:26:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.564 21:26:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:00.564 21:26:49 -- common/autotest_common.sh@10 -- # set +x 00:24:00.564 [2024-04-26 21:26:49.765133] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:00.564 [2024-04-26 21:26:49.765225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.823 [2024-04-26 21:26:49.912688] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.823 [2024-04-26 21:26:49.968504] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.823 [2024-04-26 21:26:49.968704] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.823 [2024-04-26 21:26:49.968747] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.823 [2024-04-26 21:26:49.968786] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.823 [2024-04-26 21:26:49.968827] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
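Because nvmf_tgt was started with --wait-for-rpc, the bdev_io_wait test finishes initialization over the RPC socket before building the target configuration; the rpc_cmd calls in the next part of the log do exactly that. Condensed into direct rpc.py form as a sketch of the sequence (same names and values as this run):

    ./scripts/rpc.py bdev_set_options -p 5 -c 1
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420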
00:24:00.823 [2024-04-26 21:26:49.969364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.823 [2024-04-26 21:26:49.969708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.823 [2024-04-26 21:26:49.969576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.823 [2024-04-26 21:26:49.969711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.760 21:26:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:01.760 21:26:50 -- common/autotest_common.sh@850 -- # return 0 00:24:01.760 21:26:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:01.760 21:26:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:01.760 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:01.760 21:26:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.760 21:26:50 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:24:01.760 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.760 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:01.760 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.760 21:26:50 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:24:01.760 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.760 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:01.760 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.760 21:26:50 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.760 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.760 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:01.760 [2024-04-26 21:26:50.791842] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.760 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.760 21:26:50 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:01.760 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.760 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:01.760 Malloc0 00:24:01.760 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.760 21:26:50 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.760 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.760 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:01.760 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.760 21:26:50 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:01.760 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.760 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:01.760 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.760 21:26:50 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.760 21:26:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.760 21:26:50 -- common/autotest_common.sh@10 -- # set +x 00:24:01.761 [2024-04-26 21:26:50.855123] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.761 21:26:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.761 21:26:50 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=89815 00:24:01.761 21:26:50 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:24:01.761 21:26:50 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:24:01.761 21:26:50 -- target/bdev_io_wait.sh@30 -- # READ_PID=89817 00:24:01.761 21:26:50 -- nvmf/common.sh@521 -- # config=() 00:24:01.761 21:26:50 -- nvmf/common.sh@521 -- # local subsystem config 00:24:01.761 21:26:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:01.761 21:26:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:01.761 { 00:24:01.761 "params": { 00:24:01.761 "name": "Nvme$subsystem", 00:24:01.761 "trtype": "$TEST_TRANSPORT", 00:24:01.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.761 "adrfam": "ipv4", 00:24:01.761 "trsvcid": "$NVMF_PORT", 00:24:01.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.761 "hdgst": ${hdgst:-false}, 00:24:01.761 "ddgst": ${ddgst:-false} 00:24:01.761 }, 00:24:01.761 "method": "bdev_nvme_attach_controller" 00:24:01.761 } 00:24:01.761 EOF 00:24:01.761 )") 00:24:01.761 21:26:50 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:24:01.761 21:26:50 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:24:01.761 21:26:50 -- nvmf/common.sh@521 -- # config=() 00:24:01.761 21:26:50 -- nvmf/common.sh@521 -- # local subsystem config 00:24:01.761 21:26:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:01.761 21:26:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:01.761 { 00:24:01.761 "params": { 00:24:01.761 "name": "Nvme$subsystem", 00:24:01.761 "trtype": "$TEST_TRANSPORT", 00:24:01.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.761 "adrfam": "ipv4", 00:24:01.761 "trsvcid": "$NVMF_PORT", 00:24:01.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.761 "hdgst": ${hdgst:-false}, 00:24:01.761 "ddgst": ${ddgst:-false} 00:24:01.761 }, 00:24:01.761 "method": "bdev_nvme_attach_controller" 00:24:01.761 } 00:24:01.761 EOF 00:24:01.761 )") 00:24:01.761 21:26:50 -- nvmf/common.sh@543 -- # cat 00:24:01.761 21:26:50 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:24:01.761 21:26:50 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:24:01.761 21:26:50 -- nvmf/common.sh@521 -- # config=() 00:24:01.761 21:26:50 -- nvmf/common.sh@521 -- # local subsystem config 00:24:01.761 21:26:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:01.761 21:26:50 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=89819 00:24:01.761 21:26:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:01.761 { 00:24:01.761 "params": { 00:24:01.761 "name": "Nvme$subsystem", 00:24:01.761 "trtype": "$TEST_TRANSPORT", 00:24:01.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.761 "adrfam": "ipv4", 00:24:01.761 "trsvcid": "$NVMF_PORT", 00:24:01.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.761 "hdgst": ${hdgst:-false}, 00:24:01.761 "ddgst": ${ddgst:-false} 00:24:01.761 }, 00:24:01.761 "method": "bdev_nvme_attach_controller" 00:24:01.761 } 00:24:01.761 EOF 00:24:01.761 )") 00:24:01.761 21:26:50 -- nvmf/common.sh@543 -- # cat 00:24:01.761 21:26:50 -- target/bdev_io_wait.sh@34 -- # 
UNMAP_PID=89824 00:24:01.761 21:26:50 -- target/bdev_io_wait.sh@35 -- # sync 00:24:01.761 21:26:50 -- nvmf/common.sh@543 -- # cat 00:24:01.761 21:26:50 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:24:01.761 21:26:50 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:24:01.761 21:26:50 -- nvmf/common.sh@545 -- # jq . 00:24:01.761 21:26:50 -- nvmf/common.sh@521 -- # config=() 00:24:01.761 21:26:50 -- nvmf/common.sh@521 -- # local subsystem config 00:24:01.761 21:26:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:01.761 21:26:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:01.761 { 00:24:01.761 "params": { 00:24:01.761 "name": "Nvme$subsystem", 00:24:01.761 "trtype": "$TEST_TRANSPORT", 00:24:01.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.761 "adrfam": "ipv4", 00:24:01.761 "trsvcid": "$NVMF_PORT", 00:24:01.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.761 "hdgst": ${hdgst:-false}, 00:24:01.761 "ddgst": ${ddgst:-false} 00:24:01.761 }, 00:24:01.761 "method": "bdev_nvme_attach_controller" 00:24:01.761 } 00:24:01.761 EOF 00:24:01.761 )") 00:24:01.761 21:26:50 -- nvmf/common.sh@543 -- # cat 00:24:01.761 21:26:50 -- nvmf/common.sh@546 -- # IFS=, 00:24:01.761 21:26:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:01.761 "params": { 00:24:01.761 "name": "Nvme1", 00:24:01.761 "trtype": "tcp", 00:24:01.761 "traddr": "10.0.0.2", 00:24:01.761 "adrfam": "ipv4", 00:24:01.761 "trsvcid": "4420", 00:24:01.761 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.761 "hdgst": false, 00:24:01.761 "ddgst": false 00:24:01.761 }, 00:24:01.761 "method": "bdev_nvme_attach_controller" 00:24:01.761 }' 00:24:01.761 21:26:50 -- nvmf/common.sh@545 -- # jq . 00:24:01.761 21:26:50 -- nvmf/common.sh@545 -- # jq . 00:24:01.761 21:26:50 -- nvmf/common.sh@546 -- # IFS=, 00:24:01.761 21:26:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:01.761 "params": { 00:24:01.761 "name": "Nvme1", 00:24:01.761 "trtype": "tcp", 00:24:01.761 "traddr": "10.0.0.2", 00:24:01.761 "adrfam": "ipv4", 00:24:01.761 "trsvcid": "4420", 00:24:01.761 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.761 "hdgst": false, 00:24:01.761 "ddgst": false 00:24:01.761 }, 00:24:01.761 "method": "bdev_nvme_attach_controller" 00:24:01.761 }' 00:24:01.761 21:26:50 -- nvmf/common.sh@546 -- # IFS=, 00:24:01.761 21:26:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:01.761 "params": { 00:24:01.761 "name": "Nvme1", 00:24:01.761 "trtype": "tcp", 00:24:01.761 "traddr": "10.0.0.2", 00:24:01.761 "adrfam": "ipv4", 00:24:01.761 "trsvcid": "4420", 00:24:01.761 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.761 "hdgst": false, 00:24:01.761 "ddgst": false 00:24:01.761 }, 00:24:01.761 "method": "bdev_nvme_attach_controller" 00:24:01.761 }' 00:24:01.761 21:26:50 -- nvmf/common.sh@545 -- # jq . 
00:24:01.761 21:26:50 -- nvmf/common.sh@546 -- # IFS=, 00:24:01.761 21:26:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:01.761 "params": { 00:24:01.761 "name": "Nvme1", 00:24:01.761 "trtype": "tcp", 00:24:01.761 "traddr": "10.0.0.2", 00:24:01.761 "adrfam": "ipv4", 00:24:01.761 "trsvcid": "4420", 00:24:01.761 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.761 "hdgst": false, 00:24:01.761 "ddgst": false 00:24:01.761 }, 00:24:01.761 "method": "bdev_nvme_attach_controller" 00:24:01.761 }' 00:24:01.761 [2024-04-26 21:26:50.913589] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:01.761 [2024-04-26 21:26:50.913647] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:01.761 [2024-04-26 21:26:50.931405] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:01.761 [2024-04-26 21:26:50.931592] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:24:01.761 [2024-04-26 21:26:50.934219] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:01.761 [2024-04-26 21:26:50.934277] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:24:01.761 21:26:50 -- target/bdev_io_wait.sh@37 -- # wait 89815 00:24:01.761 [2024-04-26 21:26:50.937199] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:01.761 [2024-04-26 21:26:50.937252] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:24:02.082 [2024-04-26 21:26:51.107715] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.082 [2024-04-26 21:26:51.140292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:02.082 [2024-04-26 21:26:51.166203] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.082 [2024-04-26 21:26:51.198139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:02.082 [2024-04-26 21:26:51.222954] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.082 [2024-04-26 21:26:51.269322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:24:02.082 Running I/O for 1 seconds... 00:24:02.082 [2024-04-26 21:26:51.290906] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.361 [2024-04-26 21:26:51.322955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:02.361 Running I/O for 1 seconds... 00:24:02.361 Running I/O for 1 seconds... 00:24:02.361 Running I/O for 1 seconds... 
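At this point four independent bdevperf processes are running against the same nqn.2016-06.io.spdk:cnode1 namespace on 10.0.0.2:4420, one per workload (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), each with its own instance id and 256 MB memory size, and each reading its bdev configuration from the /dev/fd/63 process substitution filled by gen_nvmf_target_json (the bdev_nvme_attach_controller entry printed above). The write instance, for example, is launched as:

    ./build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256

with -q 128 outstanding I/Os of -o 4096 bytes for -t 1 second; the per-job tables that follow report the resulting IOPS and latency.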
00:24:03.296 00:24:03.296 Latency(us) 00:24:03.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.297 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:24:03.297 Nvme1n1 : 1.02 6242.78 24.39 0.00 0.00 20185.64 8356.56 30678.86 00:24:03.297 =================================================================================================================== 00:24:03.297 Total : 6242.78 24.39 0.00 0.00 20185.64 8356.56 30678.86 00:24:03.297 00:24:03.297 Latency(us) 00:24:03.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.297 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:24:03.297 Nvme1n1 : 1.00 209770.33 819.42 0.00 0.00 608.00 250.41 1144.73 00:24:03.297 =================================================================================================================== 00:24:03.297 Total : 209770.33 819.42 0.00 0.00 608.00 250.41 1144.73 00:24:03.297 00:24:03.297 Latency(us) 00:24:03.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.297 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:24:03.297 Nvme1n1 : 1.01 10224.75 39.94 0.00 0.00 12469.41 6038.47 21520.99 00:24:03.297 =================================================================================================================== 00:24:03.297 Total : 10224.75 39.94 0.00 0.00 12469.41 6038.47 21520.99 00:24:03.297 00:24:03.297 Latency(us) 00:24:03.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.297 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:24:03.297 Nvme1n1 : 1.01 6164.88 24.08 0.00 0.00 20693.35 6181.56 48994.60 00:24:03.297 =================================================================================================================== 00:24:03.297 Total : 6164.88 24.08 0.00 0.00 20693.35 6181.56 48994.60 00:24:03.297 21:26:52 -- target/bdev_io_wait.sh@38 -- # wait 89817 00:24:03.557 21:26:52 -- target/bdev_io_wait.sh@39 -- # wait 89819 00:24:03.557 21:26:52 -- target/bdev_io_wait.sh@40 -- # wait 89824 00:24:03.557 21:26:52 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.557 21:26:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.557 21:26:52 -- common/autotest_common.sh@10 -- # set +x 00:24:03.557 21:26:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.557 21:26:52 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:24:03.557 21:26:52 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:24:03.557 21:26:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:03.557 21:26:52 -- nvmf/common.sh@117 -- # sync 00:24:03.557 21:26:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:03.557 21:26:52 -- nvmf/common.sh@120 -- # set +e 00:24:03.557 21:26:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:03.557 21:26:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:03.557 rmmod nvme_tcp 00:24:03.557 rmmod nvme_fabrics 00:24:03.557 rmmod nvme_keyring 00:24:03.557 21:26:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:03.557 21:26:52 -- nvmf/common.sh@124 -- # set -e 00:24:03.557 21:26:52 -- nvmf/common.sh@125 -- # return 0 00:24:03.557 21:26:52 -- nvmf/common.sh@478 -- # '[' -n 89762 ']' 00:24:03.557 21:26:52 -- nvmf/common.sh@479 -- # killprocess 89762 00:24:03.557 21:26:52 -- common/autotest_common.sh@936 -- # '[' -z 89762 ']' 00:24:03.557 21:26:52 -- common/autotest_common.sh@940 
-- # kill -0 89762 00:24:03.557 21:26:52 -- common/autotest_common.sh@941 -- # uname 00:24:03.557 21:26:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:03.817 21:26:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89762 00:24:03.817 21:26:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:03.817 21:26:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:03.817 killing process with pid 89762 00:24:03.817 21:26:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89762' 00:24:03.817 21:26:52 -- common/autotest_common.sh@955 -- # kill 89762 00:24:03.817 21:26:52 -- common/autotest_common.sh@960 -- # wait 89762 00:24:03.817 21:26:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:03.817 21:26:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:03.817 21:26:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:03.817 21:26:53 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.817 21:26:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:03.817 21:26:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.817 21:26:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.817 21:26:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.817 21:26:53 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:04.076 00:24:04.076 real 0m3.905s 00:24:04.076 user 0m17.155s 00:24:04.076 sys 0m1.690s 00:24:04.076 21:26:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:04.076 21:26:53 -- common/autotest_common.sh@10 -- # set +x 00:24:04.076 ************************************ 00:24:04.076 END TEST nvmf_bdev_io_wait 00:24:04.076 ************************************ 00:24:04.076 21:26:53 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:24:04.076 21:26:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:04.076 21:26:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:04.076 21:26:53 -- common/autotest_common.sh@10 -- # set +x 00:24:04.076 ************************************ 00:24:04.076 START TEST nvmf_queue_depth 00:24:04.076 ************************************ 00:24:04.076 21:26:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:24:04.337 * Looking for test storage... 
00:24:04.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:04.337 21:26:53 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:04.337 21:26:53 -- nvmf/common.sh@7 -- # uname -s 00:24:04.337 21:26:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.337 21:26:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.337 21:26:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.337 21:26:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.337 21:26:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.337 21:26:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.337 21:26:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.337 21:26:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.337 21:26:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.337 21:26:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.337 21:26:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:24:04.337 21:26:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:24:04.337 21:26:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.337 21:26:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.337 21:26:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:04.337 21:26:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.337 21:26:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:04.337 21:26:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.337 21:26:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.337 21:26:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.337 21:26:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.337 21:26:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.337 21:26:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.337 21:26:53 -- paths/export.sh@5 -- # export PATH 00:24:04.337 21:26:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.337 21:26:53 -- nvmf/common.sh@47 -- # : 0 00:24:04.337 21:26:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.337 21:26:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.337 21:26:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.337 21:26:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.337 21:26:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.337 21:26:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.337 21:26:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.337 21:26:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.337 21:26:53 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:24:04.337 21:26:53 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:24:04.337 21:26:53 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:04.337 21:26:53 -- target/queue_depth.sh@19 -- # nvmftestinit 00:24:04.337 21:26:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:04.337 21:26:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.337 21:26:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:04.337 21:26:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:04.337 21:26:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:04.337 21:26:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.337 21:26:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.337 21:26:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.337 21:26:53 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:04.337 21:26:53 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:04.337 21:26:53 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:04.337 21:26:53 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:04.337 21:26:53 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:04.337 21:26:53 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:04.337 21:26:53 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.337 21:26:53 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.337 21:26:53 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:04.337 21:26:53 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:04.337 21:26:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:04.337 21:26:53 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:04.337 21:26:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:04.337 21:26:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.337 21:26:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:04.337 21:26:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:04.337 21:26:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:04.337 21:26:53 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:04.337 21:26:53 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:04.337 21:26:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:04.337 Cannot find device "nvmf_tgt_br" 00:24:04.337 21:26:53 -- nvmf/common.sh@155 -- # true 00:24:04.337 21:26:53 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:04.337 Cannot find device "nvmf_tgt_br2" 00:24:04.337 21:26:53 -- nvmf/common.sh@156 -- # true 00:24:04.337 21:26:53 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:04.337 21:26:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:04.337 Cannot find device "nvmf_tgt_br" 00:24:04.337 21:26:53 -- nvmf/common.sh@158 -- # true 00:24:04.337 21:26:53 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:04.337 Cannot find device "nvmf_tgt_br2" 00:24:04.337 21:26:53 -- nvmf/common.sh@159 -- # true 00:24:04.337 21:26:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:04.337 21:26:53 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:04.337 21:26:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:04.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:04.337 21:26:53 -- nvmf/common.sh@162 -- # true 00:24:04.337 21:26:53 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:04.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:04.337 21:26:53 -- nvmf/common.sh@163 -- # true 00:24:04.337 21:26:53 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:04.337 21:26:53 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:04.337 21:26:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:04.598 21:26:53 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:04.598 21:26:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:04.598 21:26:53 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:04.598 21:26:53 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:04.598 21:26:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:04.598 21:26:53 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:04.598 21:26:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:04.598 21:26:53 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:04.598 21:26:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:04.598 21:26:53 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:04.598 21:26:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:04.598 21:26:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:24:04.598 21:26:53 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:04.598 21:26:53 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:04.598 21:26:53 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:04.598 21:26:53 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:04.598 21:26:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:04.598 21:26:53 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:04.598 21:26:53 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:04.598 21:26:53 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:04.598 21:26:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:04.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:04.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:24:04.598 00:24:04.598 --- 10.0.0.2 ping statistics --- 00:24:04.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.598 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:24:04.598 21:26:53 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:04.598 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:04.598 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:24:04.598 00:24:04.598 --- 10.0.0.3 ping statistics --- 00:24:04.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.598 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:24:04.598 21:26:53 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:04.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:04.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:24:04.598 00:24:04.598 --- 10.0.0.1 ping statistics --- 00:24:04.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.598 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:24:04.598 21:26:53 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.598 21:26:53 -- nvmf/common.sh@422 -- # return 0 00:24:04.598 21:26:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:04.598 21:26:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.598 21:26:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:04.598 21:26:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:04.598 21:26:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.598 21:26:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:04.598 21:26:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:04.598 21:26:53 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:24:04.598 21:26:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:04.598 21:26:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:04.598 21:26:53 -- common/autotest_common.sh@10 -- # set +x 00:24:04.598 21:26:53 -- nvmf/common.sh@470 -- # nvmfpid=90063 00:24:04.598 21:26:53 -- nvmf/common.sh@471 -- # waitforlisten 90063 00:24:04.598 21:26:53 -- common/autotest_common.sh@817 -- # '[' -z 90063 ']' 00:24:04.598 21:26:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.598 21:26:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:04.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
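Note on the setup traced above: nvmf_veth_init builds a self-contained test network in which the initiator address 10.0.0.1 stays in the default namespace while the two target addresses 10.0.0.2 and 10.0.0.3 live inside the nvmf_tgt_ns_spdk namespace, joined by veth pairs whose host-side ends are enslaved to the nvmf_br bridge. Condensed from the commands in the trace (a sketch of the topology only, not a substitute for nvmf/common.sh):

    ip netns add nvmf_tgt_ns_spdk
    # three veth pairs; the *_br ends stay in the default namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring the links up (several 'ip link set ... up' calls in the trace),
    # then bridge the default-namespace ends together and open TCP port 4420
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The pings to 10.0.0.2 and 10.0.0.3, and from inside the namespace back to 10.0.0.1, confirm the path before the target process is started.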
00:24:04.598 21:26:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.598 21:26:53 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:04.598 21:26:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:04.598 21:26:53 -- common/autotest_common.sh@10 -- # set +x 00:24:04.598 [2024-04-26 21:26:53.804742] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:04.598 [2024-04-26 21:26:53.804806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.858 [2024-04-26 21:26:53.934438] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.858 [2024-04-26 21:26:53.986994] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.858 [2024-04-26 21:26:53.987038] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.858 [2024-04-26 21:26:53.987044] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.858 [2024-04-26 21:26:53.987066] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.858 [2024-04-26 21:26:53.987071] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:04.858 [2024-04-26 21:26:53.987099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.795 21:26:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:05.795 21:26:54 -- common/autotest_common.sh@850 -- # return 0 00:24:05.795 21:26:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:05.795 21:26:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:05.795 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:24:05.795 21:26:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.795 21:26:54 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:05.795 21:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.795 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:24:05.795 [2024-04-26 21:26:54.764638] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.795 21:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.795 21:26:54 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:05.795 21:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.795 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:24:05.795 Malloc0 00:24:05.795 21:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.795 21:26:54 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:05.795 21:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.795 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:24:05.795 21:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.795 21:26:54 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:05.795 21:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.795 21:26:54 -- 
common/autotest_common.sh@10 -- # set +x 00:24:05.795 21:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.795 21:26:54 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.795 21:26:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.795 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:24:05.795 [2024-04-26 21:26:54.824617] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.795 21:26:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.795 21:26:54 -- target/queue_depth.sh@30 -- # bdevperf_pid=90114 00:24:05.795 21:26:54 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:24:05.795 21:26:54 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:05.795 21:26:54 -- target/queue_depth.sh@33 -- # waitforlisten 90114 /var/tmp/bdevperf.sock 00:24:05.795 21:26:54 -- common/autotest_common.sh@817 -- # '[' -z 90114 ']' 00:24:05.795 21:26:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.795 21:26:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:05.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:05.795 21:26:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.795 21:26:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:05.795 21:26:54 -- common/autotest_common.sh@10 -- # set +x 00:24:05.795 [2024-04-26 21:26:54.878776] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:05.796 [2024-04-26 21:26:54.878843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90114 ] 00:24:05.796 [2024-04-26 21:26:55.017352] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.055 [2024-04-26 21:26:55.068853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.634 21:26:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:06.634 21:26:55 -- common/autotest_common.sh@850 -- # return 0 00:24:06.634 21:26:55 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:06.634 21:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.634 21:26:55 -- common/autotest_common.sh@10 -- # set +x 00:24:06.634 NVMe0n1 00:24:06.634 21:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.905 21:26:55 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:06.905 Running I/O for 10 seconds... 
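For reference, the queue-depth scenario just launched reduces to a short command sequence. The rpc_cmd calls in the trace are thin wrappers around scripts/rpc.py on the default /var/tmp/spdk.sock, so they are written out here as plain rpc.py invocations (a condensed sketch of what the trace shows, using the same names and addresses):

    # target side: TCP transport, 64 MiB malloc bdev with 512-byte blocks,
    # subsystem, namespace, and a listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf keeps 1024 outstanding 4 KiB verify I/Os for 10 s
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The sustained IOPS and latency for that 1024-deep queue are reported in the table that follows in the log.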
00:24:16.920 00:24:16.920 Latency(us) 00:24:16.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.920 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:24:16.920 Verification LBA range: start 0x0 length 0x4000 00:24:16.920 NVMe0n1 : 10.06 9893.56 38.65 0.00 0.00 103064.43 15797.32 74636.63 00:24:16.920 =================================================================================================================== 00:24:16.920 Total : 9893.56 38.65 0.00 0.00 103064.43 15797.32 74636.63 00:24:16.920 0 00:24:16.920 21:27:06 -- target/queue_depth.sh@39 -- # killprocess 90114 00:24:16.920 21:27:06 -- common/autotest_common.sh@936 -- # '[' -z 90114 ']' 00:24:16.920 21:27:06 -- common/autotest_common.sh@940 -- # kill -0 90114 00:24:16.920 21:27:06 -- common/autotest_common.sh@941 -- # uname 00:24:16.920 21:27:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:16.920 21:27:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90114 00:24:16.920 21:27:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:16.920 21:27:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:16.920 killing process with pid 90114 00:24:16.920 21:27:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90114' 00:24:16.920 21:27:06 -- common/autotest_common.sh@955 -- # kill 90114 00:24:16.920 Received shutdown signal, test time was about 10.000000 seconds 00:24:16.920 00:24:16.920 Latency(us) 00:24:16.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.920 =================================================================================================================== 00:24:16.920 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:16.920 21:27:06 -- common/autotest_common.sh@960 -- # wait 90114 00:24:17.180 21:27:06 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:24:17.180 21:27:06 -- target/queue_depth.sh@43 -- # nvmftestfini 00:24:17.180 21:27:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:17.180 21:27:06 -- nvmf/common.sh@117 -- # sync 00:24:17.180 21:27:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:17.180 21:27:06 -- nvmf/common.sh@120 -- # set +e 00:24:17.180 21:27:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:17.180 21:27:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:17.180 rmmod nvme_tcp 00:24:17.180 rmmod nvme_fabrics 00:24:17.180 rmmod nvme_keyring 00:24:17.180 21:27:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:17.180 21:27:06 -- nvmf/common.sh@124 -- # set -e 00:24:17.180 21:27:06 -- nvmf/common.sh@125 -- # return 0 00:24:17.180 21:27:06 -- nvmf/common.sh@478 -- # '[' -n 90063 ']' 00:24:17.180 21:27:06 -- nvmf/common.sh@479 -- # killprocess 90063 00:24:17.180 21:27:06 -- common/autotest_common.sh@936 -- # '[' -z 90063 ']' 00:24:17.180 21:27:06 -- common/autotest_common.sh@940 -- # kill -0 90063 00:24:17.180 21:27:06 -- common/autotest_common.sh@941 -- # uname 00:24:17.180 21:27:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:17.180 21:27:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90063 00:24:17.180 21:27:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:17.180 21:27:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:17.180 killing process with pid 90063 00:24:17.180 21:27:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90063' 00:24:17.180 21:27:06 -- 
common/autotest_common.sh@955 -- # kill 90063 00:24:17.180 21:27:06 -- common/autotest_common.sh@960 -- # wait 90063 00:24:17.439 21:27:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:17.440 21:27:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:17.440 21:27:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:17.440 21:27:06 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.440 21:27:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.440 21:27:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.440 21:27:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.440 21:27:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.440 21:27:06 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:17.440 00:24:17.440 real 0m13.434s 00:24:17.440 user 0m23.304s 00:24:17.440 sys 0m1.875s 00:24:17.440 21:27:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:17.440 21:27:06 -- common/autotest_common.sh@10 -- # set +x 00:24:17.440 ************************************ 00:24:17.440 END TEST nvmf_queue_depth 00:24:17.440 ************************************ 00:24:17.699 21:27:06 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:24:17.699 21:27:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:17.699 21:27:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:17.699 21:27:06 -- common/autotest_common.sh@10 -- # set +x 00:24:17.699 ************************************ 00:24:17.699 START TEST nvmf_multipath 00:24:17.699 ************************************ 00:24:17.699 21:27:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:24:17.699 * Looking for test storage... 
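The cleanup traced at the end of the queue-depth test (nvmftestfini) mirrors the setup: the bdevperf and nvmf_tgt processes are killed, the host-side NVMe modules are unloaded, the target namespace is removed, and the initiator address is flushed. Roughly, keeping the order from the trace (only the module unloads and address flush appear verbatim; the namespace removal is assumed to be what the _remove_spdk_ns helper does):

    modprobe -v -r nvme-tcp       # the rmmod lines for nvme_tcp/nvme_fabrics/nvme_keyring follow
    modprobe -v -r nvme-fabrics
    # kill the nvmf_tgt (pid 90063), then
    # _remove_spdk_ns drops the target namespace (assumed: ip netns delete nvmf_tgt_ns_spdk)
    ip -4 addr flush nvmf_init_if

With the namespace and bridge gone, the next test (multipath, below) rebuilds the same 10.0.0.0/24 topology from scratch.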
00:24:17.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:17.699 21:27:06 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:17.699 21:27:06 -- nvmf/common.sh@7 -- # uname -s 00:24:17.699 21:27:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.699 21:27:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.699 21:27:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.699 21:27:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.699 21:27:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.699 21:27:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.699 21:27:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.699 21:27:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.699 21:27:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.699 21:27:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.699 21:27:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:24:17.699 21:27:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:24:17.699 21:27:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.699 21:27:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.699 21:27:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:17.699 21:27:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.699 21:27:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:17.699 21:27:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.699 21:27:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.699 21:27:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.699 21:27:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.699 21:27:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.699 21:27:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.699 21:27:06 -- paths/export.sh@5 -- # export PATH 00:24:17.699 21:27:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.699 21:27:06 -- nvmf/common.sh@47 -- # : 0 00:24:17.699 21:27:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:17.699 21:27:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:17.699 21:27:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.699 21:27:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.699 21:27:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.699 21:27:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:17.699 21:27:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:17.699 21:27:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:17.699 21:27:06 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:17.699 21:27:06 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:17.699 21:27:06 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:17.699 21:27:06 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:17.699 21:27:06 -- target/multipath.sh@43 -- # nvmftestinit 00:24:17.699 21:27:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:17.699 21:27:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.699 21:27:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:17.699 21:27:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:17.699 21:27:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:17.699 21:27:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.699 21:27:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.699 21:27:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.699 21:27:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:17.699 21:27:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:17.699 21:27:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:17.699 21:27:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:17.699 21:27:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:17.699 21:27:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:17.699 21:27:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.699 21:27:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.699 21:27:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:17.699 21:27:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:17.699 21:27:06 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:17.699 21:27:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:17.699 21:27:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:17.699 21:27:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.699 21:27:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:17.699 21:27:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:17.699 21:27:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:17.699 21:27:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:17.699 21:27:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:17.962 21:27:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:17.962 Cannot find device "nvmf_tgt_br" 00:24:17.962 21:27:06 -- nvmf/common.sh@155 -- # true 00:24:17.962 21:27:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:17.962 Cannot find device "nvmf_tgt_br2" 00:24:17.962 21:27:06 -- nvmf/common.sh@156 -- # true 00:24:17.962 21:27:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:17.962 21:27:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:17.962 Cannot find device "nvmf_tgt_br" 00:24:17.962 21:27:07 -- nvmf/common.sh@158 -- # true 00:24:17.962 21:27:07 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:17.962 Cannot find device "nvmf_tgt_br2" 00:24:17.962 21:27:07 -- nvmf/common.sh@159 -- # true 00:24:17.962 21:27:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:17.962 21:27:07 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:17.962 21:27:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:17.962 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:17.962 21:27:07 -- nvmf/common.sh@162 -- # true 00:24:17.962 21:27:07 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:17.962 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:17.962 21:27:07 -- nvmf/common.sh@163 -- # true 00:24:17.962 21:27:07 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:17.962 21:27:07 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:17.962 21:27:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:17.962 21:27:07 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:17.962 21:27:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:17.962 21:27:07 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:17.962 21:27:07 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:17.962 21:27:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:17.962 21:27:07 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:17.962 21:27:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:17.962 21:27:07 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:17.962 21:27:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:17.962 21:27:07 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:17.962 21:27:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:24:17.962 21:27:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:17.962 21:27:07 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:17.962 21:27:07 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:18.221 21:27:07 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:18.221 21:27:07 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:18.221 21:27:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:18.221 21:27:07 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:18.221 21:27:07 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:18.221 21:27:07 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:18.221 21:27:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:18.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:18.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:24:18.221 00:24:18.221 --- 10.0.0.2 ping statistics --- 00:24:18.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.221 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:24:18.221 21:27:07 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:18.221 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:18.221 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:24:18.221 00:24:18.221 --- 10.0.0.3 ping statistics --- 00:24:18.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.221 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:24:18.221 21:27:07 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:18.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:18.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:24:18.221 00:24:18.221 --- 10.0.0.1 ping statistics --- 00:24:18.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.221 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:18.221 21:27:07 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.221 21:27:07 -- nvmf/common.sh@422 -- # return 0 00:24:18.221 21:27:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:18.221 21:27:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.221 21:27:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:18.221 21:27:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:18.221 21:27:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.221 21:27:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:18.221 21:27:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:18.221 21:27:07 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:24:18.221 21:27:07 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:24:18.221 21:27:07 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:24:18.221 21:27:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:18.221 21:27:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:18.221 21:27:07 -- common/autotest_common.sh@10 -- # set +x 00:24:18.221 21:27:07 -- nvmf/common.sh@470 -- # nvmfpid=90451 00:24:18.221 21:27:07 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:18.221 21:27:07 -- nvmf/common.sh@471 -- # waitforlisten 90451 00:24:18.221 21:27:07 -- common/autotest_common.sh@817 -- # '[' -z 90451 ']' 00:24:18.221 21:27:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.221 21:27:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:18.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.221 21:27:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.221 21:27:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:18.221 21:27:07 -- common/autotest_common.sh@10 -- # set +x 00:24:18.221 [2024-04-26 21:27:07.377141] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:18.221 [2024-04-26 21:27:07.377224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.480 [2024-04-26 21:27:07.505620] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:18.480 [2024-04-26 21:27:07.581966] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.480 [2024-04-26 21:27:07.582038] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.480 [2024-04-26 21:27:07.582050] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.480 [2024-04-26 21:27:07.582058] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.480 [2024-04-26 21:27:07.582065] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
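One detail worth keeping in mind when reading the startup above: the nvmf_tgt for the multipath test runs inside the target namespace but is still configured from the default namespace, because /var/tmp/spdk.sock is a path-based Unix socket, which network namespaces do not isolate; only the target's TCP listeners are confined to nvmf_tgt_ns_spdk. The launch as traced (four reactors via -m 0xF, all tracepoint groups enabled):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The script then polls /var/tmp/spdk.sock (waitforlisten) before issuing any RPCs.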
00:24:18.480 [2024-04-26 21:27:07.582172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.480 [2024-04-26 21:27:07.582464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.480 [2024-04-26 21:27:07.582495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.480 [2024-04-26 21:27:07.582499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.475 21:27:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:19.475 21:27:08 -- common/autotest_common.sh@850 -- # return 0 00:24:19.475 21:27:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:19.475 21:27:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:19.475 21:27:08 -- common/autotest_common.sh@10 -- # set +x 00:24:19.475 21:27:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.475 21:27:08 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:19.475 [2024-04-26 21:27:08.687406] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.734 21:27:08 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:19.734 Malloc0 00:24:19.992 21:27:08 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:24:19.992 21:27:09 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:20.250 21:27:09 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:20.508 [2024-04-26 21:27:09.679841] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.508 21:27:09 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:20.766 [2024-04-26 21:27:09.887642] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:20.766 21:27:09 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:24:21.025 21:27:10 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:24:21.283 21:27:10 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:24:21.283 21:27:10 -- common/autotest_common.sh@1184 -- # local i=0 00:24:21.283 21:27:10 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.283 21:27:10 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:24:21.283 21:27:10 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:23.182 21:27:12 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:23.182 21:27:12 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:23.182 21:27:12 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:24:23.182 21:27:12 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:23.182 21:27:12 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.182 21:27:12 -- common/autotest_common.sh@1194 -- # return 0 00:24:23.182 21:27:12 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:24:23.182 21:27:12 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:24:23.182 21:27:12 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:24:23.182 21:27:12 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:24:23.182 21:27:12 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:24:23.182 21:27:12 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:24:23.182 21:27:12 -- target/multipath.sh@38 -- # return 0 00:24:23.182 21:27:12 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:24:23.182 21:27:12 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:24:23.182 21:27:12 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:24:23.182 21:27:12 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:24:23.182 21:27:12 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:24:23.182 21:27:12 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:24:23.182 21:27:12 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:24:23.182 21:27:12 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:24:23.182 21:27:12 -- target/multipath.sh@22 -- # local timeout=20 00:24:23.182 21:27:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:23.182 21:27:12 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:23.182 21:27:12 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:23.182 21:27:12 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:24:23.182 21:27:12 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:24:23.182 21:27:12 -- target/multipath.sh@22 -- # local timeout=20 00:24:23.182 21:27:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:23.182 21:27:12 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:24:23.182 21:27:12 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:23.182 21:27:12 -- target/multipath.sh@85 -- # echo numa 00:24:23.182 21:27:12 -- target/multipath.sh@88 -- # fio_pid=90589 00:24:23.182 21:27:12 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:24:23.182 21:27:12 -- target/multipath.sh@90 -- # sleep 1 00:24:23.182 [global] 00:24:23.182 thread=1 00:24:23.182 invalidate=1 00:24:23.182 rw=randrw 00:24:23.182 time_based=1 00:24:23.182 runtime=6 00:24:23.182 ioengine=libaio 00:24:23.182 direct=1 00:24:23.182 bs=4096 00:24:23.183 iodepth=128 00:24:23.183 norandommap=0 00:24:23.183 numjobs=1 00:24:23.183 00:24:23.183 verify_dump=1 00:24:23.183 verify_backlog=512 00:24:23.183 verify_state_save=0 00:24:23.183 do_verify=1 00:24:23.183 verify=crc32c-intel 00:24:23.183 [job0] 00:24:23.183 filename=/dev/nvme0n1 00:24:23.183 Could not set queue depth (nvme0n1) 00:24:23.441 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:23.441 fio-3.35 00:24:23.441 Starting 1 thread 00:24:24.376 21:27:13 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:24.376 21:27:13 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:24.635 21:27:13 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:24:24.635 21:27:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:24:24.635 21:27:13 -- target/multipath.sh@22 -- # local timeout=20 00:24:24.635 21:27:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:24.635 21:27:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:24.635 21:27:13 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:24.635 21:27:13 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:24:24.635 21:27:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:24:24.635 21:27:13 -- target/multipath.sh@22 -- # local timeout=20 00:24:24.635 21:27:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:24.635 21:27:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:24.635 21:27:13 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:24.635 21:27:13 -- target/multipath.sh@25 -- # sleep 1s 00:24:25.570 21:27:14 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:24:25.570 21:27:14 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:24:25.570 21:27:14 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:25.570 21:27:14 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:25.829 21:27:15 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:24:26.087 21:27:15 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:24:26.087 21:27:15 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:24:26.087 21:27:15 -- target/multipath.sh@22 -- # local timeout=20 00:24:26.087 21:27:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:26.087 21:27:15 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:26.087 21:27:15 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:26.087 21:27:15 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:24:26.087 21:27:15 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:24:26.087 21:27:15 -- target/multipath.sh@22 -- # local timeout=20 00:24:26.087 21:27:15 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:26.087 21:27:15 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:26.087 21:27:15 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:26.087 21:27:15 -- target/multipath.sh@25 -- # sleep 1s 00:24:27.022 21:27:16 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:24:27.022 21:27:16 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:24:27.022 21:27:16 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:27.022 21:27:16 -- target/multipath.sh@104 -- # wait 90589 00:24:29.626 00:24:29.626 job0: (groupid=0, jobs=1): err= 0: pid=90610: Fri Apr 26 21:27:18 2024 00:24:29.626 read: IOPS=11.4k, BW=44.4MiB/s (46.6MB/s)(266MiB/6002msec) 00:24:29.626 slat (usec): min=4, max=6662, avg=45.70, stdev=187.80 00:24:29.626 clat (usec): min=294, max=22165, avg=7606.98, stdev=1421.32 00:24:29.626 lat (usec): min=329, max=22226, avg=7652.68, stdev=1427.15 00:24:29.626 clat percentiles (usec): 00:24:29.626 | 1.00th=[ 4228], 5.00th=[ 5604], 10.00th=[ 6128], 20.00th=[ 6652], 00:24:29.626 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7832], 00:24:29.626 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[10028], 00:24:29.626 | 99.00th=[11731], 99.50th=[13042], 99.90th=[17957], 99.95th=[19006], 00:24:29.626 | 99.99th=[19792] 00:24:29.626 bw ( KiB/s): min=15536, max=31480, per=54.01%, avg=24554.00, stdev=4563.34, samples=11 00:24:29.626 iops : min= 3884, max= 7870, avg=6138.36, stdev=1140.84, samples=11 00:24:29.626 write: IOPS=6853, BW=26.8MiB/s (28.1MB/s)(147MiB/5487msec); 0 zone resets 00:24:29.626 slat (usec): min=12, max=1768, avg=63.99, stdev=118.08 00:24:29.626 clat (usec): min=259, max=14383, avg=6423.56, stdev=1194.47 00:24:29.626 lat (usec): min=344, max=14581, avg=6487.55, stdev=1197.39 00:24:29.626 clat percentiles (usec): 00:24:29.626 | 1.00th=[ 3097], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5669], 00:24:29.626 | 30.00th=[ 5997], 40.00th=[ 6259], 50.00th=[ 6456], 60.00th=[ 6652], 00:24:29.626 | 70.00th=[ 6915], 80.00th=[ 7177], 90.00th=[ 7570], 95.00th=[ 8160], 00:24:29.626 | 99.00th=[10028], 99.50th=[10683], 99.90th=[12387], 99.95th=[12649], 00:24:29.626 | 99.99th=[13304] 00:24:29.626 bw ( KiB/s): min=16384, max=31024, per=89.48%, avg=24532.73, stdev=4227.44, samples=11 00:24:29.626 iops : min= 4096, max= 7756, avg=6133.09, stdev=1056.86, samples=11 00:24:29.626 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.02% 00:24:29.626 lat (msec) : 2=0.17%, 4=1.20%, 10=94.95%, 20=3.62%, 50=0.01% 00:24:29.626 cpu : usr=6.23%, sys=33.06%, ctx=7764, majf=0, minf=133 00:24:29.626 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:24:29.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:29.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:29.626 issued rwts: total=68220,37607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:29.626 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:29.626 00:24:29.626 Run status group 0 (all jobs): 00:24:29.626 READ: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=266MiB (279MB), run=6002-6002msec 00:24:29.626 WRITE: bw=26.8MiB/s (28.1MB/s), 26.8MiB/s-26.8MiB/s (28.1MB/s-28.1MB/s), io=147MiB (154MB), run=5487-5487msec 00:24:29.626 00:24:29.626 Disk stats (read/write): 00:24:29.626 nvme0n1: ios=67490/36745, merge=0/0, ticks=461489/209334, in_queue=670823, util=98.63% 00:24:29.626 21:27:18 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:29.887 21:27:18 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:24:30.147 21:27:19 -- 
target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:24:30.147 21:27:19 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:24:30.147 21:27:19 -- target/multipath.sh@22 -- # local timeout=20 00:24:30.147 21:27:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:30.147 21:27:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:30.147 21:27:19 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:30.147 21:27:19 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:24:30.147 21:27:19 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:24:30.147 21:27:19 -- target/multipath.sh@22 -- # local timeout=20 00:24:30.147 21:27:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:30.147 21:27:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:30.147 21:27:19 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:24:30.147 21:27:19 -- target/multipath.sh@25 -- # sleep 1s 00:24:31.084 21:27:20 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:24:31.084 21:27:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:31.084 21:27:20 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:24:31.084 21:27:20 -- target/multipath.sh@113 -- # echo round-robin 00:24:31.084 21:27:20 -- target/multipath.sh@116 -- # fio_pid=90745 00:24:31.084 21:27:20 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:24:31.084 21:27:20 -- target/multipath.sh@118 -- # sleep 1 00:24:31.084 [global] 00:24:31.084 thread=1 00:24:31.084 invalidate=1 00:24:31.084 rw=randrw 00:24:31.084 time_based=1 00:24:31.084 runtime=6 00:24:31.084 ioengine=libaio 00:24:31.084 direct=1 00:24:31.084 bs=4096 00:24:31.084 iodepth=128 00:24:31.084 norandommap=0 00:24:31.084 numjobs=1 00:24:31.084 00:24:31.084 verify_dump=1 00:24:31.084 verify_backlog=512 00:24:31.084 verify_state_save=0 00:24:31.084 do_verify=1 00:24:31.084 verify=crc32c-intel 00:24:31.084 [job0] 00:24:31.084 filename=/dev/nvme0n1 00:24:31.085 Could not set queue depth (nvme0n1) 00:24:31.085 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:31.085 fio-3.35 00:24:31.085 Starting 1 thread 00:24:32.038 21:27:21 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:32.318 21:27:21 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:24:32.576 21:27:21 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:24:32.576 21:27:21 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:24:32.576 21:27:21 -- target/multipath.sh@22 -- # local timeout=20 00:24:32.576 21:27:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:32.576 21:27:21 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:24:32.576 21:27:21 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:32.576 21:27:21 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:24:32.576 21:27:21 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:24:32.576 21:27:21 -- target/multipath.sh@22 -- # local timeout=20 00:24:32.576 21:27:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:32.576 21:27:21 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:32.576 21:27:21 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:32.576 21:27:21 -- target/multipath.sh@25 -- # sleep 1s 00:24:33.511 21:27:22 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:24:33.511 21:27:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:33.511 21:27:22 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:33.511 21:27:22 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:33.770 21:27:22 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:24:34.028 21:27:23 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:24:34.028 21:27:23 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:24:34.028 21:27:23 -- target/multipath.sh@22 -- # local timeout=20 00:24:34.028 21:27:23 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:24:34.028 21:27:23 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:24:34.028 21:27:23 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:24:34.028 21:27:23 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:24:34.028 21:27:23 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:24:34.028 21:27:23 -- target/multipath.sh@22 -- # local timeout=20 00:24:34.028 21:27:23 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:24:34.028 21:27:23 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:24:34.028 21:27:23 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:34.028 21:27:23 -- target/multipath.sh@25 -- # sleep 1s 00:24:34.962 21:27:24 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:24:34.962 21:27:24 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:24:34.962 21:27:24 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:24:34.962 21:27:24 -- target/multipath.sh@132 -- # wait 90745 00:24:37.510 00:24:37.510 job0: (groupid=0, jobs=1): err= 0: pid=90766: Fri Apr 26 21:27:26 2024 00:24:37.510 read: IOPS=12.5k, BW=48.9MiB/s (51.3MB/s)(294MiB/6006msec) 00:24:37.510 slat (usec): min=3, max=5569, avg=39.13, stdev=171.20 00:24:37.510 clat (usec): min=407, max=45411, avg=7120.30, stdev=1590.99 00:24:37.510 lat (usec): min=414, max=45420, avg=7159.44, stdev=1601.62 00:24:37.510 clat percentiles (usec): 00:24:37.510 | 1.00th=[ 3621], 5.00th=[ 4686], 10.00th=[ 5211], 20.00th=[ 5932], 00:24:37.510 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7177], 60.00th=[ 7439], 00:24:37.510 | 70.00th=[ 7767], 80.00th=[ 8094], 90.00th=[ 8717], 95.00th=[ 9503], 00:24:37.510 | 99.00th=[11469], 99.50th=[12125], 99.90th=[17695], 99.95th=[20579], 00:24:37.510 | 99.99th=[44827] 00:24:37.510 bw ( KiB/s): min=12592, max=36864, per=52.99%, avg=26545.18, stdev=6828.71, samples=11 00:24:37.510 iops : min= 3148, max= 9216, avg=6636.27, stdev=1707.17, samples=11 00:24:37.510 write: IOPS=7259, BW=28.4MiB/s (29.7MB/s)(149MiB/5250msec); 0 zone resets 00:24:37.510 slat (usec): min=10, max=2644, avg=55.95, stdev=103.36 00:24:37.510 clat (usec): min=227, max=20817, avg=5914.32, stdev=1524.21 00:24:37.510 lat (usec): min=299, max=20853, avg=5970.27, stdev=1534.33 00:24:37.510 clat percentiles (usec): 00:24:37.510 | 1.00th=[ 2507], 5.00th=[ 3425], 10.00th=[ 3916], 20.00th=[ 4621], 00:24:37.510 | 30.00th=[ 5276], 40.00th=[ 5735], 50.00th=[ 6128], 60.00th=[ 6390], 00:24:37.510 | 70.00th=[ 6652], 80.00th=[ 6915], 90.00th=[ 7373], 95.00th=[ 7898], 00:24:37.510 | 99.00th=[10290], 99.50th=[11076], 99.90th=[16450], 99.95th=[17695], 00:24:37.510 | 99.99th=[19006] 00:24:37.510 bw ( KiB/s): min=13144, max=35768, per=91.27%, avg=26502.36, stdev=6504.54, samples=11 00:24:37.510 iops : min= 3286, max= 8942, avg=6625.55, stdev=1626.12, samples=11 00:24:37.510 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:24:37.510 lat (msec) : 2=0.23%, 4=4.54%, 10=92.41%, 20=2.74%, 50=0.04% 00:24:37.510 cpu : usr=5.84%, sys=32.82%, ctx=8815, majf=0, minf=84 00:24:37.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:24:37.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:37.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:37.510 issued rwts: total=75218,38113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:37.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:37.510 00:24:37.510 Run status group 0 (all jobs): 00:24:37.510 READ: bw=48.9MiB/s (51.3MB/s), 48.9MiB/s-48.9MiB/s (51.3MB/s-51.3MB/s), io=294MiB (308MB), run=6006-6006msec 00:24:37.510 WRITE: bw=28.4MiB/s (29.7MB/s), 28.4MiB/s-28.4MiB/s (29.7MB/s-29.7MB/s), io=149MiB (156MB), run=5250-5250msec 00:24:37.510 00:24:37.510 Disk stats (read/write): 00:24:37.510 nvme0n1: ios=74610/37154, merge=0/0, ticks=474499/190742, in_queue=665241, util=98.67% 00:24:37.510 21:27:26 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:37.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:24:37.510 21:27:26 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:37.510 21:27:26 -- common/autotest_common.sh@1205 -- # local i=0 00:24:37.510 21:27:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
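Taken together, the multipath run that ends above has only a few moving parts. Stripped of the check_ana_state and fio-wrapper helpers, the flow the trace shows is roughly the following (same NQN, addresses, serial, and sysfs paths as in the trace; hostnqn/hostid are the values generated by nvme gen-hostnqn in common.sh; the -r flag on nvmf_create_subsystem enables ANA reporting):

    # target: one subsystem with ANA reporting, exported on both namespace addresses
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # host: connect once per path; native NVMe multipath merges them into one nvme0n1
    # with two path devices (nvme0c0n1, nvme0c1n1), which the script's path detection uses
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

    # failover: flip a listener's ANA state on the target and watch the host follow
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    cat /sys/block/nvme0c0n1/ana_state   # per-path state the script polls

Both fio verify runs above finish with err= 0: the first while the listeners' ANA states are flipped between non_optimized and inaccessible mid-run, the second with round-robin I/O spread across both paths.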
00:24:37.510 21:27:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:37.510 21:27:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:37.510 21:27:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:37.510 21:27:26 -- common/autotest_common.sh@1217 -- # return 0 00:24:37.510 21:27:26 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:37.768 21:27:26 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:24:37.768 21:27:26 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:24:37.768 21:27:26 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:24:37.768 21:27:26 -- target/multipath.sh@144 -- # nvmftestfini 00:24:37.768 21:27:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:37.768 21:27:26 -- nvmf/common.sh@117 -- # sync 00:24:37.768 21:27:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:37.768 21:27:26 -- nvmf/common.sh@120 -- # set +e 00:24:37.768 21:27:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:37.768 21:27:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:37.768 rmmod nvme_tcp 00:24:37.768 rmmod nvme_fabrics 00:24:37.768 rmmod nvme_keyring 00:24:37.768 21:27:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:37.768 21:27:26 -- nvmf/common.sh@124 -- # set -e 00:24:37.768 21:27:26 -- nvmf/common.sh@125 -- # return 0 00:24:37.768 21:27:26 -- nvmf/common.sh@478 -- # '[' -n 90451 ']' 00:24:37.768 21:27:26 -- nvmf/common.sh@479 -- # killprocess 90451 00:24:37.768 21:27:26 -- common/autotest_common.sh@936 -- # '[' -z 90451 ']' 00:24:37.768 21:27:26 -- common/autotest_common.sh@940 -- # kill -0 90451 00:24:37.768 21:27:26 -- common/autotest_common.sh@941 -- # uname 00:24:37.768 21:27:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:37.768 21:27:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90451 00:24:37.768 killing process with pid 90451 00:24:37.768 21:27:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:37.768 21:27:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:37.768 21:27:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90451' 00:24:37.768 21:27:26 -- common/autotest_common.sh@955 -- # kill 90451 00:24:37.768 21:27:26 -- common/autotest_common.sh@960 -- # wait 90451 00:24:38.026 21:27:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:38.026 21:27:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:38.026 21:27:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:38.026 21:27:27 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.026 21:27:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:38.026 21:27:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.026 21:27:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.026 21:27:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.026 21:27:27 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:38.026 00:24:38.026 real 0m20.425s 00:24:38.026 user 1m20.485s 00:24:38.026 sys 0m7.051s 00:24:38.026 21:27:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:38.026 21:27:27 -- common/autotest_common.sh@10 -- # set +x 00:24:38.026 ************************************ 00:24:38.026 END TEST nvmf_multipath 00:24:38.026 ************************************ 00:24:38.284 21:27:27 -- nvmf/nvmf.sh@53 -- 
# run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:24:38.284 21:27:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:38.284 21:27:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:38.284 21:27:27 -- common/autotest_common.sh@10 -- # set +x 00:24:38.284 ************************************ 00:24:38.284 START TEST nvmf_zcopy 00:24:38.284 ************************************ 00:24:38.284 21:27:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:24:38.284 * Looking for test storage... 00:24:38.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:38.284 21:27:27 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:38.284 21:27:27 -- nvmf/common.sh@7 -- # uname -s 00:24:38.284 21:27:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.284 21:27:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.284 21:27:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.284 21:27:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.284 21:27:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.284 21:27:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.284 21:27:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.284 21:27:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.284 21:27:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.284 21:27:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.542 21:27:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:24:38.542 21:27:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:24:38.542 21:27:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.542 21:27:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.542 21:27:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:38.542 21:27:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.542 21:27:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:38.542 21:27:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.542 21:27:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.542 21:27:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.542 21:27:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.542 21:27:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.542 21:27:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.542 21:27:27 -- paths/export.sh@5 -- # export PATH 00:24:38.542 21:27:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.543 21:27:27 -- nvmf/common.sh@47 -- # : 0 00:24:38.543 21:27:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:38.543 21:27:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:38.543 21:27:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.543 21:27:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.543 21:27:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.543 21:27:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:38.543 21:27:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:38.543 21:27:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:38.543 21:27:27 -- target/zcopy.sh@12 -- # nvmftestinit 00:24:38.543 21:27:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:38.543 21:27:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.543 21:27:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:38.543 21:27:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:38.543 21:27:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:38.543 21:27:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.543 21:27:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.543 21:27:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.543 21:27:27 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:38.543 21:27:27 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:38.543 21:27:27 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:38.543 21:27:27 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:38.543 21:27:27 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:38.543 21:27:27 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:38.543 21:27:27 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.543 21:27:27 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.543 21:27:27 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:38.543 21:27:27 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:38.543 21:27:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:38.543 21:27:27 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:38.543 21:27:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:38.543 21:27:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.543 21:27:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:38.543 21:27:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:38.543 21:27:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:38.543 21:27:27 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:38.543 21:27:27 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:38.543 21:27:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:38.543 Cannot find device "nvmf_tgt_br" 00:24:38.543 21:27:27 -- nvmf/common.sh@155 -- # true 00:24:38.543 21:27:27 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:38.543 Cannot find device "nvmf_tgt_br2" 00:24:38.543 21:27:27 -- nvmf/common.sh@156 -- # true 00:24:38.543 21:27:27 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:38.543 21:27:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:38.543 Cannot find device "nvmf_tgt_br" 00:24:38.543 21:27:27 -- nvmf/common.sh@158 -- # true 00:24:38.543 21:27:27 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:38.543 Cannot find device "nvmf_tgt_br2" 00:24:38.543 21:27:27 -- nvmf/common.sh@159 -- # true 00:24:38.543 21:27:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:38.543 21:27:27 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:38.543 21:27:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:38.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:38.543 21:27:27 -- nvmf/common.sh@162 -- # true 00:24:38.543 21:27:27 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:38.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:38.543 21:27:27 -- nvmf/common.sh@163 -- # true 00:24:38.543 21:27:27 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:38.543 21:27:27 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:38.543 21:27:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:38.543 21:27:27 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:38.543 21:27:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:38.543 21:27:27 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:38.801 21:27:27 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:38.801 21:27:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:38.801 21:27:27 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:38.801 21:27:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:38.801 21:27:27 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:38.801 21:27:27 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:38.801 21:27:27 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:38.801 21:27:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:38.801 21:27:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:38.801 21:27:27 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:38.801 21:27:27 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:38.801 21:27:27 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:38.801 21:27:27 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:38.801 21:27:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:38.801 21:27:27 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:38.801 21:27:27 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:38.801 21:27:27 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:38.801 21:27:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:38.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:24:38.801 00:24:38.801 --- 10.0.0.2 ping statistics --- 00:24:38.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.801 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:24:38.801 21:27:27 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:38.801 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:38.801 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.127 ms 00:24:38.801 00:24:38.801 --- 10.0.0.3 ping statistics --- 00:24:38.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.801 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:24:38.801 21:27:27 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:38.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:24:38.801 00:24:38.801 --- 10.0.0.1 ping statistics --- 00:24:38.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.801 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:24:38.801 21:27:27 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.801 21:27:27 -- nvmf/common.sh@422 -- # return 0 00:24:38.801 21:27:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:38.801 21:27:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.801 21:27:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:38.801 21:27:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:38.801 21:27:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.801 21:27:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:38.801 21:27:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:38.801 21:27:27 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:24:38.801 21:27:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:38.801 21:27:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:38.801 21:27:27 -- common/autotest_common.sh@10 -- # set +x 00:24:38.801 21:27:27 -- nvmf/common.sh@470 -- # nvmfpid=91045 00:24:38.801 21:27:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:38.801 21:27:27 -- nvmf/common.sh@471 -- # waitforlisten 91045 00:24:38.801 21:27:27 -- common/autotest_common.sh@817 -- # '[' -z 91045 ']' 00:24:38.801 21:27:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.801 21:27:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:38.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.801 21:27:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.801 21:27:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:38.801 21:27:27 -- common/autotest_common.sh@10 -- # set +x 00:24:38.801 [2024-04-26 21:27:28.020065] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:38.801 [2024-04-26 21:27:28.020136] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.059 [2024-04-26 21:27:28.159565] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.059 [2024-04-26 21:27:28.215744] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.059 [2024-04-26 21:27:28.215790] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.059 [2024-04-26 21:27:28.215799] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.059 [2024-04-26 21:27:28.215805] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.059 [2024-04-26 21:27:28.215810] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
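For readers following the nvmf_veth_init trace above, the bring-up amounts to a network namespace holding the target side of a veth link, with the initiator side left in the root namespace; the following is a minimal sketch (assuming root privileges, and deliberately simplified to a single veth pair without the nvmf_br bridge and second target interface the harness creates), not the harness code itself:

# create a namespace for the NVMe-oF target and a veth pair into it
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_tgt_if
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# initiator side (root namespace) gets 10.0.0.1, target side gets 10.0.0.2
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# same reachability check the log performs before starting nvmf_tgt
ping -c 1 10.0.0.2

Starting nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -m 0x2, as in the trace) makes the initiator and target on the same host exchange real TCP traffic over the veth link rather than over loopback.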
00:24:39.059 [2024-04-26 21:27:28.215842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.995 21:27:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:39.995 21:27:28 -- common/autotest_common.sh@850 -- # return 0 00:24:39.995 21:27:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:39.996 21:27:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:39.996 21:27:28 -- common/autotest_common.sh@10 -- # set +x 00:24:39.996 21:27:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.996 21:27:29 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:24:39.996 21:27:29 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:24:39.996 21:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.996 21:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:39.996 [2024-04-26 21:27:29.027188] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.996 21:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.996 21:27:29 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:24:39.996 21:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.996 21:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:39.996 21:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.996 21:27:29 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:39.996 21:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.996 21:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:39.996 [2024-04-26 21:27:29.051262] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.996 21:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.996 21:27:29 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:39.996 21:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.996 21:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:39.996 21:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.996 21:27:29 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:24:39.996 21:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.996 21:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:39.996 malloc0 00:24:39.996 21:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.996 21:27:29 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:39.996 21:27:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.996 21:27:29 -- common/autotest_common.sh@10 -- # set +x 00:24:39.996 21:27:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.996 21:27:29 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:24:39.996 21:27:29 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:24:39.996 21:27:29 -- nvmf/common.sh@521 -- # config=() 00:24:39.996 21:27:29 -- nvmf/common.sh@521 -- # local subsystem config 00:24:39.996 21:27:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:39.996 21:27:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:39.996 { 00:24:39.996 "params": { 00:24:39.996 "name": "Nvme$subsystem", 00:24:39.996 "trtype": "$TEST_TRANSPORT", 
00:24:39.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.996 "adrfam": "ipv4", 00:24:39.996 "trsvcid": "$NVMF_PORT", 00:24:39.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.996 "hdgst": ${hdgst:-false}, 00:24:39.996 "ddgst": ${ddgst:-false} 00:24:39.996 }, 00:24:39.996 "method": "bdev_nvme_attach_controller" 00:24:39.996 } 00:24:39.996 EOF 00:24:39.996 )") 00:24:39.996 21:27:29 -- nvmf/common.sh@543 -- # cat 00:24:39.996 21:27:29 -- nvmf/common.sh@545 -- # jq . 00:24:39.996 21:27:29 -- nvmf/common.sh@546 -- # IFS=, 00:24:39.996 21:27:29 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:39.996 "params": { 00:24:39.996 "name": "Nvme1", 00:24:39.996 "trtype": "tcp", 00:24:39.996 "traddr": "10.0.0.2", 00:24:39.996 "adrfam": "ipv4", 00:24:39.996 "trsvcid": "4420", 00:24:39.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:39.996 "hdgst": false, 00:24:39.996 "ddgst": false 00:24:39.996 }, 00:24:39.996 "method": "bdev_nvme_attach_controller" 00:24:39.996 }' 00:24:39.996 [2024-04-26 21:27:29.162996] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:39.996 [2024-04-26 21:27:29.163069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91099 ] 00:24:40.254 [2024-04-26 21:27:29.330871] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.254 [2024-04-26 21:27:29.386148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.513 Running I/O for 10 seconds... 00:24:50.544 00:24:50.544 Latency(us) 00:24:50.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.544 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:24:50.544 Verification LBA range: start 0x0 length 0x1000 00:24:50.544 Nvme1n1 : 10.01 6781.95 52.98 0.00 0.00 18818.58 1116.12 27817.03 00:24:50.544 =================================================================================================================== 00:24:50.544 Total : 6781.95 52.98 0.00 0.00 18818.58 1116.12 27817.03 00:24:50.544 21:27:39 -- target/zcopy.sh@39 -- # perfpid=91216 00:24:50.544 21:27:39 -- target/zcopy.sh@41 -- # xtrace_disable 00:24:50.544 21:27:39 -- common/autotest_common.sh@10 -- # set +x 00:24:50.544 21:27:39 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:24:50.544 21:27:39 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:24:50.544 21:27:39 -- nvmf/common.sh@521 -- # config=() 00:24:50.544 21:27:39 -- nvmf/common.sh@521 -- # local subsystem config 00:24:50.544 21:27:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:50.544 21:27:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:50.544 { 00:24:50.544 "params": { 00:24:50.544 "name": "Nvme$subsystem", 00:24:50.544 "trtype": "$TEST_TRANSPORT", 00:24:50.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:50.544 "adrfam": "ipv4", 00:24:50.544 "trsvcid": "$NVMF_PORT", 00:24:50.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:50.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:50.544 "hdgst": ${hdgst:-false}, 00:24:50.544 "ddgst": ${ddgst:-false} 00:24:50.544 }, 00:24:50.544 "method": "bdev_nvme_attach_controller" 00:24:50.544 } 00:24:50.544 EOF 00:24:50.544 
)") 00:24:50.544 21:27:39 -- nvmf/common.sh@543 -- # cat 00:24:50.544 [2024-04-26 21:27:39.735565] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.544 [2024-04-26 21:27:39.735598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.544 21:27:39 -- nvmf/common.sh@545 -- # jq . 00:24:50.544 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.544 21:27:39 -- nvmf/common.sh@546 -- # IFS=, 00:24:50.544 21:27:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:50.544 "params": { 00:24:50.544 "name": "Nvme1", 00:24:50.544 "trtype": "tcp", 00:24:50.544 "traddr": "10.0.0.2", 00:24:50.544 "adrfam": "ipv4", 00:24:50.544 "trsvcid": "4420", 00:24:50.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:50.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:50.544 "hdgst": false, 00:24:50.544 "ddgst": false 00:24:50.544 }, 00:24:50.544 "method": "bdev_nvme_attach_controller" 00:24:50.544 }' 00:24:50.544 [2024-04-26 21:27:39.747515] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.544 [2024-04-26 21:27:39.747537] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.544 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.544 [2024-04-26 21:27:39.759484] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.544 [2024-04-26 21:27:39.759503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.544 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.544 [2024-04-26 21:27:39.771465] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.544 [2024-04-26 21:27:39.771488] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.544 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.544 [2024-04-26 21:27:39.783441] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.544 [2024-04-26 21:27:39.783461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.544 [2024-04-26 21:27:39.786003] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:24:50.544 [2024-04-26 21:27:39.786062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91216 ] 00:24:50.544 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.544 [2024-04-26 21:27:39.795426] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.544 [2024-04-26 21:27:39.795444] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.804 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.804 [2024-04-26 21:27:39.807413] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.804 [2024-04-26 21:27:39.807433] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.804 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.804 [2024-04-26 21:27:39.819396] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.804 [2024-04-26 21:27:39.819414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.804 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.804 [2024-04-26 21:27:39.831381] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.831400] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.843362] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.843381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.855349] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.855371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 
2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.867323] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.867346] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.879304] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.879322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.891278] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.891294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.903279] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.903299] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.915243] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.915259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.927222] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.927238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 [2024-04-26 21:27:39.929412] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.805 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.939215] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.939240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.951188] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.951205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.963179] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.963201] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.975148] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.975166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.983484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.805 [2024-04-26 21:27:39.987142] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.987166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:39 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:39.999125] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:39.999150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:40.011099] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:40.011120] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:40.023076] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:40.023095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:40.035069] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:40.035089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:50.805 [2024-04-26 21:27:40.047056] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:50.805 [2024-04-26 21:27:40.047077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:50.805 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.064 [2024-04-26 21:27:40.059020] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.064 [2024-04-26 21:27:40.059050] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.064 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.064 [2024-04-26 21:27:40.071027] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.064 [2024-04-26 21:27:40.071051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.064 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.064 [2024-04-26 21:27:40.083035] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.064 [2024-04-26 21:27:40.083064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.094978] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.095005] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.106953] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.106976] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.118929] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.118953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.130916] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.130940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 Running I/O for 5 seconds... 
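The -32602 rejections that repeat through the rest of this run are the target refusing nvmf_subsystem_add_ns for an NSID that is still attached to nqn.2016-06.io.spdk:cnode1 while bdevperf I/O is in flight. Reproduced in isolation against a running target, the failing call looks roughly like the sketch below, which reuses the bdev name, NQN, and NSID from the trace (paths relative to the SPDK repo root); it is an illustration, not the harness code:

# create the backing bdev and attach it as namespace 1 (succeeds the first time)
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# a second add with the same NSID is rejected, matching the log:
# "Requested NSID 1 already in use" / JSON-RPC error Code=-32602 Invalid parameters
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1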
00:24:51.065 [2024-04-26 21:27:40.142883] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.142902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.160044] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.160071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.176231] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.176258] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.193170] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.193197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.210620] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.210649] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.226415] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.226443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.242739] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.242766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.254503] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.254536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.271317] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.271363] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.286534] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.286560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.301218] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.301245] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.065 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.065 [2024-04-26 21:27:40.312922] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.065 [2024-04-26 21:27:40.312947] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.325 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.325 [2024-04-26 21:27:40.328825] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.325 [2024-04-26 21:27:40.328852] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.325 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.325 [2024-04-26 21:27:40.344437] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.325 [2024-04-26 21:27:40.344467] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.325 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.325 [2024-04-26 21:27:40.356207] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.325 [2024-04-26 21:27:40.356236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.325 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.325 [2024-04-26 21:27:40.372003] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.325 [2024-04-26 21:27:40.372038] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.325 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.325 [2024-04-26 21:27:40.388699] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.325 [2024-04-26 21:27:40.388732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.325 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.325 [2024-04-26 21:27:40.405460] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.325 [2024-04-26 21:27:40.405496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.325 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.325 [2024-04-26 21:27:40.422283] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.325 [2024-04-26 21:27:40.422320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.325 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.325 [2024-04-26 21:27:40.438063] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.325 [2024-04-26 21:27:40.438099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.325 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.325 [2024-04-26 21:27:40.453206] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.325 [2024-04-26 21:27:40.453244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.325 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.325 [2024-04-26 21:27:40.465869] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.325 [2024-04-26 21:27:40.465905] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.325 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.325 [2024-04-26 21:27:40.481759] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.326 [2024-04-26 21:27:40.481799] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.326 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.326 [2024-04-26 21:27:40.498894] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.326 [2024-04-26 21:27:40.498936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.326 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.326 [2024-04-26 21:27:40.514377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.326 [2024-04-26 21:27:40.514413] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.326 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.326 [2024-04-26 21:27:40.525743] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.326 [2024-04-26 21:27:40.525775] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.326 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.326 [2024-04-26 21:27:40.542893] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:24:51.326 [2024-04-26 21:27:40.542935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.326 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.326 [2024-04-26 21:27:40.558687] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.326 [2024-04-26 21:27:40.558721] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.326 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.326 [2024-04-26 21:27:40.576067] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.326 [2024-04-26 21:27:40.576109] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.591619] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.586 [2024-04-26 21:27:40.591665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.604000] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.586 [2024-04-26 21:27:40.604042] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.620492] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.586 [2024-04-26 21:27:40.620529] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.637227] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.586 [2024-04-26 21:27:40.637270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.653394] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.586 [2024-04-26 21:27:40.653430] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.665325] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.586 [2024-04-26 21:27:40.665369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.676315] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.586 [2024-04-26 21:27:40.676361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.693082] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.586 [2024-04-26 21:27:40.693124] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.709576] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.586 [2024-04-26 21:27:40.709621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.725899] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.586 [2024-04-26 21:27:40.725936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.743011] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:24:51.586 [2024-04-26 21:27:40.743048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.759010] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.586 [2024-04-26 21:27:40.759043] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.769992] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.586 [2024-04-26 21:27:40.770022] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.785909] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.586 [2024-04-26 21:27:40.785937] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.586 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.586 [2024-04-26 21:27:40.802175] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.587 [2024-04-26 21:27:40.802206] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.587 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.587 [2024-04-26 21:27:40.817725] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.587 [2024-04-26 21:27:40.817758] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.587 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.587 [2024-04-26 21:27:40.832429] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.587 [2024-04-26 21:27:40.832462] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.587 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.846 [2024-04-26 21:27:40.844379] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:40.844405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:40.859907] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:40.859937] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:40.876467] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:40.876498] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:40.893467] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:40.893494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:40.909992] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:40.910019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:40.926254] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:40.926281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:40.943109] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:40.943135] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:40.955016] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:40.955043] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:40.970748] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:40.970773] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:40.986078] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:40.986102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:40 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:41.001200] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:41.001225] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:41.013106] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:41.013132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:41.029191] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:41.029216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:41.045214] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:41.045241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:41.058623] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:41.058650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:41.074600] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:41.074624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:51.847 [2024-04-26 21:27:41.091434] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:51.847 [2024-04-26 21:27:41.091461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:51.847 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.108034] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.108060] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.124182] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.124209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.140425] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.140450] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.152033] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.152057] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.167066] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.167092] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.181302] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.181327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.196614] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.196641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.213025] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.213053] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.229266] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.229294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:24:52.107 [2024-04-26 21:27:41.245570] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.245597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.261734] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.261760] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.278541] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.278567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.294763] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.294790] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.311638] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.311669] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.327881] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.327906] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.340285] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.340310] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.107 2024/04/26 21:27:41 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.107 [2024-04-26 21:27:41.356899] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.107 [2024-04-26 21:27:41.356925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.372716] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.372741] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.384189] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.384213] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.400590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.400616] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.416164] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.416190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.430600] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.430627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.445341] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.445369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.460287] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.460318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.471542] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.471571] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.487681] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.487710] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.504026] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.504054] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.520959] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.520989] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.537139] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.537169] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.549616] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.549644] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.561594] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.561621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.578029] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.578057] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.594175] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.594202] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.367 [2024-04-26 21:27:41.606158] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.367 [2024-04-26 21:27:41.606185] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.367 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.627 [2024-04-26 21:27:41.622200] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.627 [2024-04-26 21:27:41.622227] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.627 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.627 [2024-04-26 21:27:41.638024] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:24:52.627 [2024-04-26 21:27:41.638049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.627 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.627 [2024-04-26 21:27:41.648644] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.627 [2024-04-26 21:27:41.648667] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.627 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.627 [2024-04-26 21:27:41.663785] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.627 [2024-04-26 21:27:41.663809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.627 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.627 [2024-04-26 21:27:41.678347] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.627 [2024-04-26 21:27:41.678372] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.627 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.627 [2024-04-26 21:27:41.694806] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.627 [2024-04-26 21:27:41.694832] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.627 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.627 [2024-04-26 21:27:41.710643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.627 [2024-04-26 21:27:41.710668] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.627 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.627 [2024-04-26 21:27:41.722846] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.628 [2024-04-26 21:27:41.722874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.628 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.628 [2024-04-26 21:27:41.738093] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.628 [2024-04-26 21:27:41.738118] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.628 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.628 [2024-04-26 21:27:41.748996] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.628 [2024-04-26 21:27:41.749025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.628 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.628 [2024-04-26 21:27:41.765848] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.628 [2024-04-26 21:27:41.765892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.628 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.628 [2024-04-26 21:27:41.780656] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.628 [2024-04-26 21:27:41.780683] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.628 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.628 [2024-04-26 21:27:41.796357] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.628 [2024-04-26 21:27:41.796381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.628 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.628 [2024-04-26 21:27:41.812471] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.628 [2024-04-26 21:27:41.812498] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.628 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.628 [2024-04-26 21:27:41.823293] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:24:52.628 [2024-04-26 21:27:41.823320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.628 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.628 [2024-04-26 21:27:41.839357] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.628 [2024-04-26 21:27:41.839382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.628 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.628 [2024-04-26 21:27:41.854955] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.628 [2024-04-26 21:27:41.854982] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.628 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.628 [2024-04-26 21:27:41.870789] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.628 [2024-04-26 21:27:41.870817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.628 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:41.886992] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:41.887021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:41.903266] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:41.903295] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:41.919437] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:41.919462] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:41.935850] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:41.935877] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:41.952116] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:41.952143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:41.968006] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:41.968036] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:41.984409] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:41.984437] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:41 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:42.000711] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:42.000737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:42.012944] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:42.012973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:42.029528] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:42.029558] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:42.045445] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:42.045471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:42.060478] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:42.060506] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:42.077723] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:42.077750] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:42.093865] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:42.093893] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:42.110168] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:42.110197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:52.887 [2024-04-26 21:27:42.126573] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:52.887 [2024-04-26 21:27:42.126601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:52.887 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.145 [2024-04-26 21:27:42.141627] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.145 [2024-04-26 21:27:42.141655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.145 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.145 [2024-04-26 21:27:42.157451] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.145 [2024-04-26 21:27:42.157478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.145 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.145 [2024-04-26 21:27:42.173964] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.145 [2024-04-26 21:27:42.173993] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.145 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.145 [2024-04-26 21:27:42.190033] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.145 [2024-04-26 21:27:42.190060] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.145 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.145 [2024-04-26 21:27:42.206134] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.145 [2024-04-26 21:27:42.206162] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.145 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.146 [2024-04-26 21:27:42.219857] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.146 [2024-04-26 21:27:42.219885] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.146 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.146 [2024-04-26 21:27:42.236115] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.146 [2024-04-26 21:27:42.236144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:24:53.146 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.146 [2024-04-26 21:27:42.252276] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.146 [2024-04-26 21:27:42.252302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.146 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.146 [2024-04-26 21:27:42.268750] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.146 [2024-04-26 21:27:42.268779] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.146 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.146 [2024-04-26 21:27:42.285267] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.146 [2024-04-26 21:27:42.285297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.146 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.146 [2024-04-26 21:27:42.302307] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.146 [2024-04-26 21:27:42.302348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.146 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.146 [2024-04-26 21:27:42.318327] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.146 [2024-04-26 21:27:42.318369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.146 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.146 [2024-04-26 21:27:42.330308] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.146 [2024-04-26 21:27:42.330344] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.146 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:24:53.146 [2024-04-26 21:27:42.345762] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.146 [2024-04-26 21:27:42.345788] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.146 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.146 [2024-04-26 21:27:42.361741] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.146 [2024-04-26 21:27:42.361768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.146 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.146 [2024-04-26 21:27:42.375648] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.146 [2024-04-26 21:27:42.375678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.146 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.146 [2024-04-26 21:27:42.391413] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.146 [2024-04-26 21:27:42.391442] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.146 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.407251] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.407281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.418950] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.418975] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.434245] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.434269] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.446081] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.446107] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.462525] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.462553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.478609] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.478639] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.491158] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.491186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.507714] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.507745] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.524258] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.524288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.540536] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.540569] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.557625] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.557661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.574029] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.574060] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.585208] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.585238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.602026] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.602057] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.617033] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.617063] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.632385] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.632415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.409 [2024-04-26 21:27:42.647549] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.409 [2024-04-26 21:27:42.647583] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.409 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.671 [2024-04-26 21:27:42.664991] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.671 [2024-04-26 21:27:42.665025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.671 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.671 [2024-04-26 21:27:42.681063] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.671 [2024-04-26 21:27:42.681096] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.671 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.671 [2024-04-26 21:27:42.698737] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.671 [2024-04-26 21:27:42.698770] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.671 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.671 [2024-04-26 21:27:42.714474] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.671 [2024-04-26 21:27:42.714505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.671 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.671 [2024-04-26 21:27:42.731110] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.671 [2024-04-26 21:27:42.731139] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.671 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.671 [2024-04-26 21:27:42.747685] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:24:53.671 [2024-04-26 21:27:42.747715] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.671 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.672 [2024-04-26 21:27:42.763798] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.672 [2024-04-26 21:27:42.763827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.672 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.672 [2024-04-26 21:27:42.774814] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.672 [2024-04-26 21:27:42.774841] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.672 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.672 [2024-04-26 21:27:42.790450] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.672 [2024-04-26 21:27:42.790479] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.672 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.672 [2024-04-26 21:27:42.806336] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.672 [2024-04-26 21:27:42.806372] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.672 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.672 [2024-04-26 21:27:42.820503] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.672 [2024-04-26 21:27:42.820528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.672 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.672 [2024-04-26 21:27:42.835455] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.672 [2024-04-26 21:27:42.835493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.672 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.672 [2024-04-26 21:27:42.849405] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.672 [2024-04-26 21:27:42.849428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.672 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.672 [2024-04-26 21:27:42.863481] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.672 [2024-04-26 21:27:42.863505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.672 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.672 [2024-04-26 21:27:42.877858] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.672 [2024-04-26 21:27:42.877898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.672 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.672 [2024-04-26 21:27:42.892438] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.672 [2024-04-26 21:27:42.892468] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.672 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.672 [2024-04-26 21:27:42.904381] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.672 [2024-04-26 21:27:42.904409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.672 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.672 [2024-04-26 21:27:42.919812] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.672 [2024-04-26 21:27:42.919841] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.931 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.931 [2024-04-26 21:27:42.935878] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:24:53.931 [2024-04-26 21:27:42.935905] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.931 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.931 [2024-04-26 21:27:42.950646] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.931 [2024-04-26 21:27:42.950673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.931 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.931 [2024-04-26 21:27:42.966737] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.931 [2024-04-26 21:27:42.966765] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.931 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.931 [2024-04-26 21:27:42.978233] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.931 [2024-04-26 21:27:42.978260] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.931 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.931 [2024-04-26 21:27:42.994707] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.931 [2024-04-26 21:27:42.994737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.931 2024/04/26 21:27:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.932 [2024-04-26 21:27:43.010690] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.932 [2024-04-26 21:27:43.010719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.932 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.932 [2024-04-26 21:27:43.025462] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.932 [2024-04-26 21:27:43.025490] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.932 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.932 [2024-04-26 21:27:43.041627] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.932 [2024-04-26 21:27:43.041654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.932 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.932 [2024-04-26 21:27:43.055808] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.932 [2024-04-26 21:27:43.055834] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.932 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.932 [2024-04-26 21:27:43.067143] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.932 [2024-04-26 21:27:43.067170] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.932 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.932 [2024-04-26 21:27:43.083275] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.932 [2024-04-26 21:27:43.083305] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.932 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.932 [2024-04-26 21:27:43.100316] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.932 [2024-04-26 21:27:43.100354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.932 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.932 [2024-04-26 21:27:43.115336] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.932 [2024-04-26 21:27:43.115370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.932 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.932 [2024-04-26 21:27:43.129555] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.932 [2024-04-26 21:27:43.129590] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.932 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.932 [2024-04-26 21:27:43.144747] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.932 [2024-04-26 21:27:43.144773] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.932 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.932 [2024-04-26 21:27:43.160945] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.932 [2024-04-26 21:27:43.160973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.932 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:53.932 [2024-04-26 21:27:43.173092] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:53.932 [2024-04-26 21:27:43.173119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:53.932 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.188098] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.188127] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.199539] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.199565] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.215138] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.215164] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.230479] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.230506] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.244989] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.245014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.256692] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.256718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.271928] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.271953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.283387] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.283413] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.299484] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.299508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.314582] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.314607] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.330272] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.330301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.346091] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.346119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.360574] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.360601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.376929] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.376956] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.392282] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.392309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.406608] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.406639] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:24:54.192 [2024-04-26 21:27:43.421928] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.421957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.192 [2024-04-26 21:27:43.438683] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.192 [2024-04-26 21:27:43.438714] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.192 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.455747] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.455779] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.472044] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.472077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.488686] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.488718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.505721] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.505753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.521512] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.521540] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.533307] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.533346] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.549063] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.549095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.569228] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.569259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.585557] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.585587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.596698] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.596724] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.611763] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.611789] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.622891] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.622917] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.638477] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.638503] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.654248] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.654277] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.668602] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.668630] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.683686] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.683712] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.452 [2024-04-26 21:27:43.694843] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.452 [2024-04-26 21:27:43.694871] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.452 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.727 [2024-04-26 21:27:43.709586] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.727 [2024-04-26 21:27:43.709612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.727 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.727 [2024-04-26 21:27:43.724480] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.727 [2024-04-26 21:27:43.724509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.727 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.727 [2024-04-26 21:27:43.740592] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.727 [2024-04-26 21:27:43.740622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.727 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.727 [2024-04-26 21:27:43.756344] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.727 [2024-04-26 21:27:43.756369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.727 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.727 [2024-04-26 21:27:43.770719] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.727 [2024-04-26 21:27:43.770749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.727 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.727 [2024-04-26 21:27:43.785861] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.727 [2024-04-26 21:27:43.785891] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.727 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.727 [2024-04-26 21:27:43.802533] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:24:54.727 [2024-04-26 21:27:43.802562] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:24:54.727 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:24:54.727 [2024-04-26 21:27:43.818730] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:24:54.727 [2024-04-26 21:27:43.818762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:24:54.727 2024/04/26 21:27:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[The three messages above -- subsystem.c:1906 "Requested NSID 1 already in use", nvmf_rpc.c:1534 "Unable to add namespace", and the JSON-RPC Code=-32602 (Invalid parameters) reply to nvmf_subsystem_add_ns -- repeat with only their timestamps changing for every retry logged from 21:27:43.818 through 21:27:45.135.]
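For context, the rejected calls above are ordinary nvmf_subsystem_add_ns JSON-RPC requests that ask for a namespace ID the subsystem already owns. A minimal shell sketch of the same conflict, assuming a default SPDK rpc.py socket and reusing the malloc0/cnode1 names and the serial number from this job's common.sh (an illustration, not the exact test script):

    # create the backing 64 MB malloc bdev (512-byte blocks) and a subsystem once
    scripts/rpc.py bdev_malloc_create -b malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    # the first claim on NSID 1 succeeds
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # any further claim on NSID 1 for the same subsystem is rejected with
    # "Requested NSID 1 already in use" and the Code=-32602 Invalid parameters reply seen above
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1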
00:24:56.117 Latency(us)
00:24:56.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:56.117 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:24:56.117 Nvme1n1 : 5.01 14202.85 110.96 0.00 0.00 9002.86 3892.09 21292.05
00:24:56.117 ===================================================================================================================
00:24:56.117 Total : 14202.85 110.96 0.00 0.00 9002.86 3892.09 21292.05
[The same three-message NSID-conflict sequence keeps repeating, with only the timestamps changing, from 21:27:45.145 through 21:27:45.325, until the background namespace-add loop is killed below.]
00:24:56.118 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (91216) - No such process
00:24:56.118 21:27:45 -- target/zcopy.sh@49 -- # wait 91216
00:24:56.118 21:27:45 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:24:56.118 21:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:56.118 21:27:45 -- common/autotest_common.sh@10 -- # set +x
00:24:56.118 21:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:56.118 21:27:45 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:24:56.118 21:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:56.118 21:27:45 -- common/autotest_common.sh@10 -- # set +x
00:24:56.118 delay0
00:24:56.118 21:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:56.118 21:27:45 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:24:56.118 21:27:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:24:56.118 21:27:45 -- common/autotest_common.sh@10 -- # set +x
00:24:56.396 21:27:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:24:56.396 21:27:45 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r
'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:24:56.396 [2024-04-26 21:27:45.538090] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:25:03.041 Initializing NVMe Controllers 00:25:03.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:03.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:03.041 Initialization complete. Launching workers. 00:25:03.041 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 81 00:25:03.041 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 368, failed to submit 33 00:25:03.041 success 182, unsuccess 186, failed 0 00:25:03.041 21:27:51 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:25:03.041 21:27:51 -- target/zcopy.sh@60 -- # nvmftestfini 00:25:03.041 21:27:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:03.041 21:27:51 -- nvmf/common.sh@117 -- # sync 00:25:03.041 21:27:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:03.041 21:27:51 -- nvmf/common.sh@120 -- # set +e 00:25:03.041 21:27:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:03.041 21:27:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:03.041 rmmod nvme_tcp 00:25:03.041 rmmod nvme_fabrics 00:25:03.041 rmmod nvme_keyring 00:25:03.041 21:27:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:03.041 21:27:51 -- nvmf/common.sh@124 -- # set -e 00:25:03.041 21:27:51 -- nvmf/common.sh@125 -- # return 0 00:25:03.041 21:27:51 -- nvmf/common.sh@478 -- # '[' -n 91045 ']' 00:25:03.041 21:27:51 -- nvmf/common.sh@479 -- # killprocess 91045 00:25:03.041 21:27:51 -- common/autotest_common.sh@936 -- # '[' -z 91045 ']' 00:25:03.041 21:27:51 -- common/autotest_common.sh@940 -- # kill -0 91045 00:25:03.041 21:27:51 -- common/autotest_common.sh@941 -- # uname 00:25:03.041 21:27:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:03.041 21:27:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91045 00:25:03.041 21:27:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:03.041 21:27:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:03.041 killing process with pid 91045 00:25:03.041 21:27:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91045' 00:25:03.041 21:27:51 -- common/autotest_common.sh@955 -- # kill 91045 00:25:03.041 21:27:51 -- common/autotest_common.sh@960 -- # wait 91045 00:25:03.041 21:27:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:03.041 21:27:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:03.041 21:27:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:03.041 21:27:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:03.041 21:27:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:03.041 21:27:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.041 21:27:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.041 21:27:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.041 21:27:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:03.041 00:25:03.041 real 0m24.569s 00:25:03.041 user 0m40.846s 00:25:03.041 sys 0m5.752s 00:25:03.041 21:27:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:03.041 21:27:51 -- common/autotest_common.sh@10 -- # set +x 00:25:03.041 
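The abort pass above runs against a namespace backed by an SPDK delay bdev, so queued I/O stays outstanding long enough to be aborted. A rough standalone equivalent of those two rpc_cmd steps plus the abort example, assuming the same 10.0.0.2 target address and a default rpc.py socket (the -r/-t/-w/-n values are the average and tail read/write latencies in microseconds, as passed by the test):

    # stack a delay bdev on malloc0 that adds roughly 1 s of latency to reads and writes
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # publish the delay bdev as NSID 1 of the subsystem the initiator connects to
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # 5 s of 50/50 random read/write at queue depth 64, aborting outstanding commands
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'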
************************************ 00:25:03.041 END TEST nvmf_zcopy 00:25:03.041 ************************************ 00:25:03.041 21:27:52 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:25:03.041 21:27:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:03.041 21:27:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:03.041 21:27:52 -- common/autotest_common.sh@10 -- # set +x 00:25:03.041 ************************************ 00:25:03.041 START TEST nvmf_nmic 00:25:03.041 ************************************ 00:25:03.041 21:27:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:25:03.041 * Looking for test storage... 00:25:03.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:03.041 21:27:52 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:03.041 21:27:52 -- nvmf/common.sh@7 -- # uname -s 00:25:03.041 21:27:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.041 21:27:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.041 21:27:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.041 21:27:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.041 21:27:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.042 21:27:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.042 21:27:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.042 21:27:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.042 21:27:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.042 21:27:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.042 21:27:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:25:03.042 21:27:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:25:03.042 21:27:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.042 21:27:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.042 21:27:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:03.042 21:27:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.042 21:27:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:03.042 21:27:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.042 21:27:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.042 21:27:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.042 21:27:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.042 21:27:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.042 21:27:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.042 21:27:52 -- paths/export.sh@5 -- # export PATH 00:25:03.042 21:27:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.042 21:27:52 -- nvmf/common.sh@47 -- # : 0 00:25:03.042 21:27:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.042 21:27:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.042 21:27:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.042 21:27:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.042 21:27:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.042 21:27:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.042 21:27:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.042 21:27:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.042 21:27:52 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:03.042 21:27:52 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:03.042 21:27:52 -- target/nmic.sh@14 -- # nvmftestinit 00:25:03.042 21:27:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:03.042 21:27:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.042 21:27:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:03.042 21:27:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:03.042 21:27:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:03.042 21:27:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.042 21:27:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.042 21:27:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.042 21:27:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:03.042 21:27:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:03.042 21:27:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:03.042 21:27:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:03.042 21:27:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:03.042 21:27:52 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:25:03.042 21:27:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.042 21:27:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.042 21:27:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:03.042 21:27:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:03.042 21:27:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:03.042 21:27:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:03.042 21:27:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:03.042 21:27:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.042 21:27:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:03.042 21:27:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:03.042 21:27:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:03.042 21:27:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:03.042 21:27:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:03.042 21:27:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:03.042 Cannot find device "nvmf_tgt_br" 00:25:03.042 21:27:52 -- nvmf/common.sh@155 -- # true 00:25:03.042 21:27:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:03.042 Cannot find device "nvmf_tgt_br2" 00:25:03.042 21:27:52 -- nvmf/common.sh@156 -- # true 00:25:03.042 21:27:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:03.042 21:27:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:03.327 Cannot find device "nvmf_tgt_br" 00:25:03.327 21:27:52 -- nvmf/common.sh@158 -- # true 00:25:03.327 21:27:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:03.327 Cannot find device "nvmf_tgt_br2" 00:25:03.327 21:27:52 -- nvmf/common.sh@159 -- # true 00:25:03.327 21:27:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:03.327 21:27:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:03.327 21:27:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:03.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:03.327 21:27:52 -- nvmf/common.sh@162 -- # true 00:25:03.327 21:27:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:03.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:03.327 21:27:52 -- nvmf/common.sh@163 -- # true 00:25:03.327 21:27:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:03.327 21:27:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:03.327 21:27:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:03.327 21:27:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:03.327 21:27:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:03.327 21:27:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:03.327 21:27:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:03.327 21:27:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:03.327 21:27:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:03.327 21:27:52 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:25:03.327 21:27:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:03.327 21:27:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:03.327 21:27:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:03.327 21:27:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:03.327 21:27:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:03.327 21:27:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:03.327 21:27:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:03.327 21:27:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:03.327 21:27:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:03.327 21:27:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:03.327 21:27:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:03.327 21:27:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:03.606 21:27:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:03.606 21:27:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:03.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:25:03.606 00:25:03.606 --- 10.0.0.2 ping statistics --- 00:25:03.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.606 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:25:03.606 21:27:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:03.606 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:03.606 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:25:03.606 00:25:03.606 --- 10.0.0.3 ping statistics --- 00:25:03.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.606 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:25:03.606 21:27:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:03.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:03.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:25:03.606 00:25:03.606 --- 10.0.0.1 ping statistics --- 00:25:03.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.606 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:03.606 21:27:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.606 21:27:52 -- nvmf/common.sh@422 -- # return 0 00:25:03.606 21:27:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:03.606 21:27:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.606 21:27:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:03.606 21:27:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:03.606 21:27:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.606 21:27:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:03.606 21:27:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:03.606 21:27:52 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:25:03.606 21:27:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:03.606 21:27:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:03.606 21:27:52 -- common/autotest_common.sh@10 -- # set +x 00:25:03.606 21:27:52 -- nvmf/common.sh@470 -- # nvmfpid=91535 00:25:03.606 21:27:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:03.606 21:27:52 -- nvmf/common.sh@471 -- # waitforlisten 91535 00:25:03.606 21:27:52 -- common/autotest_common.sh@817 -- # '[' -z 91535 ']' 00:25:03.606 21:27:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.606 21:27:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:03.606 21:27:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.606 21:27:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:03.606 21:27:52 -- common/autotest_common.sh@10 -- # set +x 00:25:03.606 [2024-04-26 21:27:52.663703] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:03.606 [2024-04-26 21:27:52.663780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.606 [2024-04-26 21:27:52.796480] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.864 [2024-04-26 21:27:52.853506] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.864 [2024-04-26 21:27:52.853559] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.864 [2024-04-26 21:27:52.853567] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.864 [2024-04-26 21:27:52.853572] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.864 [2024-04-26 21:27:52.853578] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:03.864 [2024-04-26 21:27:52.853685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.864 [2024-04-26 21:27:52.856415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.864 [2024-04-26 21:27:52.856482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.864 [2024-04-26 21:27:52.856487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.435 21:27:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:04.435 21:27:53 -- common/autotest_common.sh@850 -- # return 0 00:25:04.435 21:27:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:04.435 21:27:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:04.435 21:27:53 -- common/autotest_common.sh@10 -- # set +x 00:25:04.435 21:27:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.435 21:27:53 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:04.435 21:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.435 21:27:53 -- common/autotest_common.sh@10 -- # set +x 00:25:04.435 [2024-04-26 21:27:53.598522] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.435 21:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.435 21:27:53 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:04.435 21:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.435 21:27:53 -- common/autotest_common.sh@10 -- # set +x 00:25:04.435 Malloc0 00:25:04.435 21:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.435 21:27:53 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:04.435 21:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.435 21:27:53 -- common/autotest_common.sh@10 -- # set +x 00:25:04.435 21:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.435 21:27:53 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:04.435 21:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.435 21:27:53 -- common/autotest_common.sh@10 -- # set +x 00:25:04.435 21:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.435 21:27:53 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:04.435 21:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.435 21:27:53 -- common/autotest_common.sh@10 -- # set +x 00:25:04.435 [2024-04-26 21:27:53.676685] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.435 21:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.435 test case1: single bdev can't be used in multiple subsystems 00:25:04.435 21:27:53 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:25:04.435 21:27:53 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:04.435 21:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.435 21:27:53 -- common/autotest_common.sh@10 -- # set +x 00:25:04.694 21:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.694 21:27:53 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:04.694 21:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:25:04.694 21:27:53 -- common/autotest_common.sh@10 -- # set +x 00:25:04.694 21:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.694 21:27:53 -- target/nmic.sh@28 -- # nmic_status=0 00:25:04.694 21:27:53 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:25:04.694 21:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.694 21:27:53 -- common/autotest_common.sh@10 -- # set +x 00:25:04.694 [2024-04-26 21:27:53.712511] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:25:04.694 [2024-04-26 21:27:53.712567] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:25:04.694 [2024-04-26 21:27:53.712575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:04.694 2024/04/26 21:27:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:04.694 request: 00:25:04.694 { 00:25:04.694 "method": "nvmf_subsystem_add_ns", 00:25:04.694 "params": { 00:25:04.694 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:25:04.694 "namespace": { 00:25:04.694 "bdev_name": "Malloc0", 00:25:04.694 "no_auto_visible": false 00:25:04.694 } 00:25:04.694 } 00:25:04.694 } 00:25:04.694 Got JSON-RPC error response 00:25:04.694 GoRPCClient: error on JSON-RPC call 00:25:04.694 21:27:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:04.694 21:27:53 -- target/nmic.sh@29 -- # nmic_status=1 00:25:04.694 21:27:53 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:25:04.694 Adding namespace failed - expected result. 00:25:04.694 21:27:53 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:25:04.694 test case2: host connect to nvmf target in multiple paths 00:25:04.694 21:27:53 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:25:04.694 21:27:53 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:04.694 21:27:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.694 21:27:53 -- common/autotest_common.sh@10 -- # set +x 00:25:04.694 [2024-04-26 21:27:53.724609] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:04.694 21:27:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.694 21:27:53 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:04.694 21:27:53 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:25:04.953 21:27:54 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:25:04.953 21:27:54 -- common/autotest_common.sh@1184 -- # local i=0 00:25:04.953 21:27:54 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.953 21:27:54 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:25:04.953 21:27:54 -- common/autotest_common.sh@1191 -- # sleep 2 00:25:06.858 21:27:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:25:06.858 21:27:56 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:25:06.858 21:27:56 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:25:06.858 21:27:56 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:25:06.858 21:27:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.858 21:27:56 -- common/autotest_common.sh@1194 -- # return 0 00:25:06.858 21:27:56 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:25:06.858 [global] 00:25:06.858 thread=1 00:25:06.858 invalidate=1 00:25:06.858 rw=write 00:25:06.858 time_based=1 00:25:06.858 runtime=1 00:25:06.858 ioengine=libaio 00:25:06.858 direct=1 00:25:06.858 bs=4096 00:25:06.858 iodepth=1 00:25:06.858 norandommap=0 00:25:06.858 numjobs=1 00:25:06.858 00:25:06.858 verify_dump=1 00:25:06.858 verify_backlog=512 00:25:06.858 verify_state_save=0 00:25:06.858 do_verify=1 00:25:06.858 verify=crc32c-intel 00:25:06.858 [job0] 00:25:06.858 filename=/dev/nvme0n1 00:25:07.116 Could not set queue depth (nvme0n1) 00:25:07.116 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:07.116 fio-3.35 00:25:07.116 Starting 1 thread 00:25:08.494 00:25:08.494 job0: (groupid=0, jobs=1): err= 0: pid=91645: Fri Apr 26 21:27:57 2024 00:25:08.494 read: IOPS=3651, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1001msec) 00:25:08.494 slat (nsec): min=8767, max=38242, avg=10056.89, stdev=1269.18 00:25:08.494 clat (usec): min=103, max=224, avg=132.74, stdev=19.31 00:25:08.494 lat (usec): min=113, max=234, avg=142.80, stdev=19.33 00:25:08.494 clat percentiles (usec): 00:25:08.494 | 1.00th=[ 108], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 117], 00:25:08.494 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 127], 60.00th=[ 135], 00:25:08.494 | 70.00th=[ 143], 80.00th=[ 153], 90.00th=[ 161], 
95.00th=[ 169], 00:25:08.494 | 99.00th=[ 182], 99.50th=[ 184], 99.90th=[ 198], 99.95th=[ 210], 00:25:08.494 | 99.99th=[ 225] 00:25:08.494 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:25:08.494 slat (usec): min=12, max=155, avg=15.99, stdev= 6.57 00:25:08.494 clat (usec): min=60, max=1331, avg=98.60, stdev=39.40 00:25:08.494 lat (usec): min=88, max=1345, avg=114.60, stdev=40.72 00:25:08.494 clat percentiles (usec): 00:25:08.494 | 1.00th=[ 78], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:25:08.494 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 93], 60.00th=[ 98], 00:25:08.494 | 70.00th=[ 103], 80.00th=[ 110], 90.00th=[ 116], 95.00th=[ 123], 00:25:08.494 | 99.00th=[ 151], 99.50th=[ 260], 99.90th=[ 668], 99.95th=[ 914], 00:25:08.494 | 99.99th=[ 1336] 00:25:08.494 bw ( KiB/s): min=16904, max=16904, per=100.00%, avg=16904.00, stdev= 0.00, samples=1 00:25:08.494 iops : min= 4226, max= 4226, avg=4226.00, stdev= 0.00, samples=1 00:25:08.494 lat (usec) : 100=33.58%, 250=66.13%, 500=0.19%, 750=0.04%, 1000=0.03% 00:25:08.494 lat (msec) : 2=0.03% 00:25:08.494 cpu : usr=1.30%, sys=7.80%, ctx=7754, majf=0, minf=2 00:25:08.494 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:08.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.494 issued rwts: total=3655,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.494 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:08.494 00:25:08.494 Run status group 0 (all jobs): 00:25:08.494 READ: bw=14.3MiB/s (15.0MB/s), 14.3MiB/s-14.3MiB/s (15.0MB/s-15.0MB/s), io=14.3MiB (15.0MB), run=1001-1001msec 00:25:08.494 WRITE: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1001-1001msec 00:25:08.494 00:25:08.494 Disk stats (read/write): 00:25:08.494 nvme0n1: ios=3480/3584, merge=0/0, ticks=488/375, in_queue=863, util=90.98% 00:25:08.494 21:27:57 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:08.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:25:08.494 21:27:57 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:08.494 21:27:57 -- common/autotest_common.sh@1205 -- # local i=0 00:25:08.495 21:27:57 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:08.495 21:27:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:08.495 21:27:57 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:08.495 21:27:57 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:08.495 21:27:57 -- common/autotest_common.sh@1217 -- # return 0 00:25:08.495 21:27:57 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:08.495 21:27:57 -- target/nmic.sh@53 -- # nvmftestfini 00:25:08.495 21:27:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:08.495 21:27:57 -- nvmf/common.sh@117 -- # sync 00:25:08.495 21:27:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:08.495 21:27:57 -- nvmf/common.sh@120 -- # set +e 00:25:08.495 21:27:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:08.495 21:27:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:08.495 rmmod nvme_tcp 00:25:08.495 rmmod nvme_fabrics 00:25:08.495 rmmod nvme_keyring 00:25:08.495 21:27:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.495 21:27:57 -- nvmf/common.sh@124 -- # set -e 00:25:08.495 21:27:57 -- nvmf/common.sh@125 -- # return 0 
00:25:08.495 21:27:57 -- nvmf/common.sh@478 -- # '[' -n 91535 ']' 00:25:08.495 21:27:57 -- nvmf/common.sh@479 -- # killprocess 91535 00:25:08.495 21:27:57 -- common/autotest_common.sh@936 -- # '[' -z 91535 ']' 00:25:08.495 21:27:57 -- common/autotest_common.sh@940 -- # kill -0 91535 00:25:08.495 21:27:57 -- common/autotest_common.sh@941 -- # uname 00:25:08.495 21:27:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:08.495 21:27:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91535 00:25:08.495 21:27:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:08.495 killing process with pid 91535 00:25:08.495 21:27:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:08.495 21:27:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91535' 00:25:08.495 21:27:57 -- common/autotest_common.sh@955 -- # kill 91535 00:25:08.495 21:27:57 -- common/autotest_common.sh@960 -- # wait 91535 00:25:08.753 21:27:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:08.753 21:27:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:08.753 21:27:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:08.753 21:27:57 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.753 21:27:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:08.753 21:27:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.753 21:27:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.753 21:27:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.753 21:27:57 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:08.753 00:25:08.753 real 0m5.830s 00:25:08.753 user 0m19.682s 00:25:08.753 sys 0m1.264s 00:25:08.753 21:27:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:08.753 21:27:57 -- common/autotest_common.sh@10 -- # set +x 00:25:08.753 ************************************ 00:25:08.753 END TEST nvmf_nmic 00:25:08.753 ************************************ 00:25:08.753 21:27:57 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:25:08.753 21:27:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:08.753 21:27:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:08.753 21:27:57 -- common/autotest_common.sh@10 -- # set +x 00:25:09.012 ************************************ 00:25:09.012 START TEST nvmf_fio_target 00:25:09.012 ************************************ 00:25:09.012 21:27:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:25:09.012 * Looking for test storage... 
00:25:09.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:09.012 21:27:58 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:09.012 21:27:58 -- nvmf/common.sh@7 -- # uname -s 00:25:09.012 21:27:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.012 21:27:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.012 21:27:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.012 21:27:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.012 21:27:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.012 21:27:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.012 21:27:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.012 21:27:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.012 21:27:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.012 21:27:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.012 21:27:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:25:09.012 21:27:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:25:09.012 21:27:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.012 21:27:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.012 21:27:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:09.012 21:27:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.012 21:27:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:09.012 21:27:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.012 21:27:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.012 21:27:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.012 21:27:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.012 21:27:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.012 21:27:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.012 21:27:58 -- paths/export.sh@5 -- # export PATH 00:25:09.012 21:27:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.012 21:27:58 -- nvmf/common.sh@47 -- # : 0 00:25:09.012 21:27:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:09.012 21:27:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:09.012 21:27:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.012 21:27:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.012 21:27:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.012 21:27:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:09.012 21:27:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:09.012 21:27:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:09.012 21:27:58 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:09.012 21:27:58 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:09.012 21:27:58 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:09.012 21:27:58 -- target/fio.sh@16 -- # nvmftestinit 00:25:09.012 21:27:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:09.012 21:27:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.012 21:27:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:09.012 21:27:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:09.012 21:27:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:09.012 21:27:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.012 21:27:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.012 21:27:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.012 21:27:58 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:09.012 21:27:58 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:09.012 21:27:58 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:09.012 21:27:58 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:09.012 21:27:58 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:09.012 21:27:58 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:09.012 21:27:58 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:09.012 21:27:58 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:09.012 21:27:58 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:09.012 21:27:58 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:09.012 21:27:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:09.012 21:27:58 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:09.012 21:27:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:09.012 21:27:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:09.012 21:27:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:09.012 21:27:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:09.012 21:27:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:09.012 21:27:58 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:09.012 21:27:58 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:09.012 21:27:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:09.012 Cannot find device "nvmf_tgt_br" 00:25:09.012 21:27:58 -- nvmf/common.sh@155 -- # true 00:25:09.012 21:27:58 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:09.012 Cannot find device "nvmf_tgt_br2" 00:25:09.012 21:27:58 -- nvmf/common.sh@156 -- # true 00:25:09.012 21:27:58 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:09.012 21:27:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:09.012 Cannot find device "nvmf_tgt_br" 00:25:09.012 21:27:58 -- nvmf/common.sh@158 -- # true 00:25:09.012 21:27:58 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:09.012 Cannot find device "nvmf_tgt_br2" 00:25:09.012 21:27:58 -- nvmf/common.sh@159 -- # true 00:25:09.012 21:27:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:09.012 21:27:58 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:09.271 21:27:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:09.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:09.271 21:27:58 -- nvmf/common.sh@162 -- # true 00:25:09.271 21:27:58 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:09.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:09.271 21:27:58 -- nvmf/common.sh@163 -- # true 00:25:09.271 21:27:58 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:09.271 21:27:58 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:09.271 21:27:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:09.271 21:27:58 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:09.271 21:27:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:09.271 21:27:58 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:09.271 21:27:58 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:09.271 21:27:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:09.271 21:27:58 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:09.271 21:27:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:09.271 21:27:58 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:09.271 21:27:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:09.271 21:27:58 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:09.271 21:27:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:09.271 21:27:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:25:09.271 21:27:58 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:09.271 21:27:58 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:09.271 21:27:58 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:09.271 21:27:58 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:09.271 21:27:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:09.271 21:27:58 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:09.271 21:27:58 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:09.271 21:27:58 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:09.271 21:27:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:09.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:09.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:25:09.271 00:25:09.271 --- 10.0.0.2 ping statistics --- 00:25:09.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.271 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:25:09.271 21:27:58 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:09.271 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:09.271 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:25:09.271 00:25:09.271 --- 10.0.0.3 ping statistics --- 00:25:09.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.271 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:09.271 21:27:58 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:09.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:09.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:25:09.271 00:25:09.271 --- 10.0.0.1 ping statistics --- 00:25:09.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.271 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:25:09.271 21:27:58 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.271 21:27:58 -- nvmf/common.sh@422 -- # return 0 00:25:09.271 21:27:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:09.271 21:27:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.271 21:27:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:09.271 21:27:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:09.271 21:27:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.271 21:27:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:09.271 21:27:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:09.271 21:27:58 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:25:09.271 21:27:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:09.271 21:27:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:09.271 21:27:58 -- common/autotest_common.sh@10 -- # set +x 00:25:09.271 21:27:58 -- nvmf/common.sh@470 -- # nvmfpid=91826 00:25:09.271 21:27:58 -- nvmf/common.sh@471 -- # waitforlisten 91826 00:25:09.271 21:27:58 -- common/autotest_common.sh@817 -- # '[' -z 91826 ']' 00:25:09.271 21:27:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.271 21:27:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:09.271 21:27:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:09.272 21:27:58 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:09.272 21:27:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:09.272 21:27:58 -- common/autotest_common.sh@10 -- # set +x 00:25:09.530 [2024-04-26 21:27:58.527706] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:09.530 [2024-04-26 21:27:58.527816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.530 [2024-04-26 21:27:58.667227] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:09.530 [2024-04-26 21:27:58.742472] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.530 [2024-04-26 21:27:58.742549] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:09.530 [2024-04-26 21:27:58.742561] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:09.530 [2024-04-26 21:27:58.742571] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:09.530 [2024-04-26 21:27:58.742580] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:09.530 [2024-04-26 21:27:58.742691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.530 [2024-04-26 21:27:58.743218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:09.530 [2024-04-26 21:27:58.743284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:09.530 [2024-04-26 21:27:58.743294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.464 21:27:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:10.464 21:27:59 -- common/autotest_common.sh@850 -- # return 0 00:25:10.464 21:27:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:10.464 21:27:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:10.464 21:27:59 -- common/autotest_common.sh@10 -- # set +x 00:25:10.464 21:27:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.464 21:27:59 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:10.721 [2024-04-26 21:27:59.938528] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.980 21:27:59 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:11.256 21:28:00 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:25:11.257 21:28:00 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:11.515 21:28:00 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:25:11.515 21:28:00 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:11.773 21:28:00 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:25:11.773 21:28:00 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:12.031 21:28:01 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:25:12.031 21:28:01 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:25:12.289 21:28:01 -- target/fio.sh@29 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:12.853 21:28:01 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:25:12.853 21:28:01 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:13.110 21:28:02 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:25:13.110 21:28:02 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:13.377 21:28:02 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:25:13.377 21:28:02 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:25:13.945 21:28:02 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:13.945 21:28:03 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:25:13.945 21:28:03 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:14.539 21:28:03 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:25:14.539 21:28:03 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:14.539 21:28:03 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.798 [2024-04-26 21:28:04.024713] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.056 21:28:04 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:25:15.056 21:28:04 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:25:15.314 21:28:04 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:15.573 21:28:04 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:25:15.573 21:28:04 -- common/autotest_common.sh@1184 -- # local i=0 00:25:15.573 21:28:04 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:25:15.573 21:28:04 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:25:15.573 21:28:04 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:25:15.573 21:28:04 -- common/autotest_common.sh@1191 -- # sleep 2 00:25:17.473 21:28:06 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:25:17.473 21:28:06 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:25:17.473 21:28:06 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:25:17.473 21:28:06 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:25:17.473 21:28:06 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:25:17.748 21:28:06 -- common/autotest_common.sh@1194 -- # return 0 00:25:17.748 21:28:06 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:25:17.748 [global] 00:25:17.748 thread=1 00:25:17.748 invalidate=1 00:25:17.748 rw=write 00:25:17.748 time_based=1 00:25:17.748 runtime=1 00:25:17.748 ioengine=libaio 00:25:17.748 direct=1 00:25:17.748 bs=4096 00:25:17.748 iodepth=1 00:25:17.748 norandommap=0 
00:25:17.748 numjobs=1 00:25:17.748 00:25:17.748 verify_dump=1 00:25:17.748 verify_backlog=512 00:25:17.748 verify_state_save=0 00:25:17.748 do_verify=1 00:25:17.748 verify=crc32c-intel 00:25:17.748 [job0] 00:25:17.748 filename=/dev/nvme0n1 00:25:17.748 [job1] 00:25:17.748 filename=/dev/nvme0n2 00:25:17.748 [job2] 00:25:17.748 filename=/dev/nvme0n3 00:25:17.748 [job3] 00:25:17.748 filename=/dev/nvme0n4 00:25:17.748 Could not set queue depth (nvme0n1) 00:25:17.748 Could not set queue depth (nvme0n2) 00:25:17.748 Could not set queue depth (nvme0n3) 00:25:17.748 Could not set queue depth (nvme0n4) 00:25:17.748 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:17.748 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:17.748 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:17.748 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:17.748 fio-3.35 00:25:17.748 Starting 4 threads 00:25:19.173 00:25:19.173 job0: (groupid=0, jobs=1): err= 0: pid=92129: Fri Apr 26 21:28:08 2024 00:25:19.173 read: IOPS=1795, BW=7181KiB/s (7353kB/s)(7188KiB/1001msec) 00:25:19.173 slat (nsec): min=9073, max=96468, avg=18632.05, stdev=6992.26 00:25:19.173 clat (usec): min=150, max=5440, avg=267.53, stdev=142.54 00:25:19.173 lat (usec): min=176, max=5471, avg=286.17, stdev=142.74 00:25:19.173 clat percentiles (usec): 00:25:19.173 | 1.00th=[ 212], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:25:19.173 | 30.00th=[ 255], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 265], 00:25:19.173 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:25:19.173 | 99.00th=[ 310], 99.50th=[ 347], 99.90th=[ 3261], 99.95th=[ 5473], 00:25:19.173 | 99.99th=[ 5473] 00:25:19.173 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:25:19.173 slat (usec): min=13, max=155, avg=24.68, stdev= 9.82 00:25:19.173 clat (usec): min=112, max=946, avg=208.68, stdev=25.56 00:25:19.173 lat (usec): min=133, max=968, avg=233.35, stdev=23.86 00:25:19.173 clat percentiles (usec): 00:25:19.173 | 1.00th=[ 157], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 194], 00:25:19.173 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:25:19.173 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 239], 00:25:19.173 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 314], 99.95th=[ 359], 00:25:19.173 | 99.99th=[ 947] 00:25:19.173 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:25:19.173 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:19.173 lat (usec) : 250=62.13%, 500=37.79%, 1000=0.03% 00:25:19.173 lat (msec) : 4=0.03%, 10=0.03% 00:25:19.173 cpu : usr=0.90%, sys=6.80%, ctx=3856, majf=0, minf=6 00:25:19.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:19.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.173 issued rwts: total=1797,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:19.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:19.173 job1: (groupid=0, jobs=1): err= 0: pid=92130: Fri Apr 26 21:28:08 2024 00:25:19.173 read: IOPS=1830, BW=7321KiB/s (7496kB/s)(7328KiB/1001msec) 00:25:19.173 slat (nsec): min=8473, max=43294, avg=11158.53, stdev=3678.66 
00:25:19.173 clat (usec): min=127, max=1063, avg=270.79, stdev=36.01 00:25:19.173 lat (usec): min=136, max=1078, avg=281.95, stdev=36.42 00:25:19.173 clat percentiles (usec): 00:25:19.173 | 1.00th=[ 143], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 260], 00:25:19.173 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 273], 00:25:19.173 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:25:19.173 | 99.00th=[ 367], 99.50th=[ 396], 99.90th=[ 816], 99.95th=[ 1057], 00:25:19.173 | 99.99th=[ 1057] 00:25:19.173 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:25:19.173 slat (usec): min=13, max=148, avg=24.09, stdev= 9.52 00:25:19.173 clat (usec): min=112, max=389, avg=208.99, stdev=19.57 00:25:19.173 lat (usec): min=133, max=529, avg=233.08, stdev=18.76 00:25:19.173 clat percentiles (usec): 00:25:19.173 | 1.00th=[ 155], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 196], 00:25:19.173 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:25:19.173 | 70.00th=[ 219], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 241], 00:25:19.173 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 281], 99.95th=[ 347], 00:25:19.173 | 99.99th=[ 392] 00:25:19.173 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:25:19.173 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:19.173 lat (usec) : 250=54.41%, 500=45.52%, 750=0.03%, 1000=0.03% 00:25:19.173 lat (msec) : 2=0.03% 00:25:19.173 cpu : usr=1.60%, sys=4.70%, ctx=3881, majf=0, minf=11 00:25:19.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:19.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.173 issued rwts: total=1832,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:19.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:19.173 job2: (groupid=0, jobs=1): err= 0: pid=92131: Fri Apr 26 21:28:08 2024 00:25:19.173 read: IOPS=2848, BW=11.1MiB/s (11.7MB/s)(11.1MiB/1001msec) 00:25:19.173 slat (nsec): min=8267, max=39237, avg=12139.65, stdev=3654.01 00:25:19.173 clat (usec): min=138, max=594, avg=167.00, stdev=16.28 00:25:19.173 lat (usec): min=147, max=606, avg=179.14, stdev=16.95 00:25:19.173 clat percentiles (usec): 00:25:19.173 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 157], 00:25:19.173 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:25:19.173 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 188], 00:25:19.173 | 99.00th=[ 204], 99.50th=[ 219], 99.90th=[ 400], 99.95th=[ 433], 00:25:19.173 | 99.99th=[ 594] 00:25:19.173 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:25:19.173 slat (usec): min=12, max=149, avg=19.19, stdev= 8.64 00:25:19.173 clat (usec): min=109, max=267, avg=137.25, stdev=12.35 00:25:19.173 lat (usec): min=123, max=417, avg=156.44, stdev=16.44 00:25:19.174 clat percentiles (usec): 00:25:19.174 | 1.00th=[ 117], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 128], 00:25:19.174 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:25:19.174 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 159], 00:25:19.174 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 200], 99.95th=[ 229], 00:25:19.174 | 99.99th=[ 269] 00:25:19.174 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:25:19.174 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:25:19.174 
lat (usec) : 250=99.85%, 500=0.14%, 750=0.02% 00:25:19.174 cpu : usr=1.60%, sys=7.00%, ctx=5924, majf=0, minf=7 00:25:19.174 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:19.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.174 issued rwts: total=2851,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:19.174 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:19.174 job3: (groupid=0, jobs=1): err= 0: pid=92132: Fri Apr 26 21:28:08 2024 00:25:19.174 read: IOPS=2946, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec) 00:25:19.174 slat (nsec): min=8625, max=32130, avg=10063.66, stdev=1871.77 00:25:19.174 clat (usec): min=139, max=1695, avg=167.85, stdev=38.96 00:25:19.174 lat (usec): min=149, max=1714, avg=177.91, stdev=39.28 00:25:19.174 clat percentiles (usec): 00:25:19.174 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 157], 00:25:19.174 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:25:19.174 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 190], 00:25:19.174 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 898], 99.95th=[ 1205], 00:25:19.174 | 99.99th=[ 1696] 00:25:19.174 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:25:19.174 slat (usec): min=12, max=121, avg=16.10, stdev= 6.52 00:25:19.174 clat (usec): min=106, max=295, avg=136.19, stdev=12.08 00:25:19.174 lat (usec): min=119, max=371, avg=152.28, stdev=14.96 00:25:19.174 clat percentiles (usec): 00:25:19.174 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:25:19.174 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:25:19.174 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:25:19.174 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 221], 99.95th=[ 251], 00:25:19.174 | 99.99th=[ 297] 00:25:19.174 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:25:19.174 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:25:19.174 lat (usec) : 250=99.85%, 500=0.10%, 1000=0.02% 00:25:19.174 lat (msec) : 2=0.03% 00:25:19.174 cpu : usr=1.30%, sys=5.90%, ctx=6021, majf=0, minf=11 00:25:19.174 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:19.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.174 issued rwts: total=2949,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:19.174 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:19.174 00:25:19.174 Run status group 0 (all jobs): 00:25:19.174 READ: bw=36.8MiB/s (38.6MB/s), 7181KiB/s-11.5MiB/s (7353kB/s-12.1MB/s), io=36.8MiB (38.6MB), run=1001-1001msec 00:25:19.174 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:25:19.174 00:25:19.174 Disk stats (read/write): 00:25:19.174 nvme0n1: ios=1586/1876, merge=0/0, ticks=458/417, in_queue=875, util=89.78% 00:25:19.174 nvme0n2: ios=1585/1878, merge=0/0, ticks=455/415, in_queue=870, util=90.22% 00:25:19.174 nvme0n3: ios=2590/2644, merge=0/0, ticks=466/379, in_queue=845, util=90.01% 00:25:19.174 nvme0n4: ios=2596/2754, merge=0/0, ticks=467/388, in_queue=855, util=90.48% 00:25:19.174 21:28:08 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:25:19.174 
[global] 00:25:19.174 thread=1 00:25:19.174 invalidate=1 00:25:19.174 rw=randwrite 00:25:19.174 time_based=1 00:25:19.174 runtime=1 00:25:19.174 ioengine=libaio 00:25:19.174 direct=1 00:25:19.174 bs=4096 00:25:19.174 iodepth=1 00:25:19.174 norandommap=0 00:25:19.174 numjobs=1 00:25:19.174 00:25:19.174 verify_dump=1 00:25:19.174 verify_backlog=512 00:25:19.174 verify_state_save=0 00:25:19.174 do_verify=1 00:25:19.174 verify=crc32c-intel 00:25:19.174 [job0] 00:25:19.174 filename=/dev/nvme0n1 00:25:19.174 [job1] 00:25:19.174 filename=/dev/nvme0n2 00:25:19.174 [job2] 00:25:19.174 filename=/dev/nvme0n3 00:25:19.174 [job3] 00:25:19.174 filename=/dev/nvme0n4 00:25:19.174 Could not set queue depth (nvme0n1) 00:25:19.174 Could not set queue depth (nvme0n2) 00:25:19.174 Could not set queue depth (nvme0n3) 00:25:19.174 Could not set queue depth (nvme0n4) 00:25:19.174 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:19.174 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:19.174 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:19.174 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:19.174 fio-3.35 00:25:19.174 Starting 4 threads 00:25:20.551 00:25:20.551 job0: (groupid=0, jobs=1): err= 0: pid=92192: Fri Apr 26 21:28:09 2024 00:25:20.551 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:25:20.551 slat (nsec): min=7473, max=48823, avg=12258.31, stdev=3344.84 00:25:20.551 clat (usec): min=136, max=475, avg=246.97, stdev=56.77 00:25:20.551 lat (usec): min=148, max=491, avg=259.23, stdev=56.92 00:25:20.551 clat percentiles (usec): 00:25:20.551 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 172], 00:25:20.551 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:25:20.551 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 347], 00:25:20.551 | 99.00th=[ 375], 99.50th=[ 392], 99.90th=[ 429], 99.95th=[ 449], 00:25:20.551 | 99.99th=[ 478] 00:25:20.551 write: IOPS=2082, BW=8332KiB/s (8532kB/s)(8340KiB/1001msec); 0 zone resets 00:25:20.551 slat (usec): min=12, max=222, avg=26.15, stdev=11.02 00:25:20.551 clat (usec): min=98, max=2312, avg=195.35, stdev=66.26 00:25:20.551 lat (usec): min=117, max=2331, avg=221.50, stdev=67.10 00:25:20.551 clat percentiles (usec): 00:25:20.551 | 1.00th=[ 110], 5.00th=[ 119], 10.00th=[ 124], 20.00th=[ 135], 00:25:20.551 | 30.00th=[ 176], 40.00th=[ 202], 50.00th=[ 210], 60.00th=[ 217], 00:25:20.551 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 241], 95.00th=[ 249], 00:25:20.551 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 529], 99.95th=[ 537], 00:25:20.551 | 99.99th=[ 2311] 00:25:20.551 bw ( KiB/s): min= 8800, max= 8800, per=22.41%, avg=8800.00, stdev= 0.00, samples=1 00:25:20.551 iops : min= 2200, max= 2200, avg=2200.00, stdev= 0.00, samples=1 00:25:20.551 lat (usec) : 100=0.02%, 250=66.90%, 500=33.00%, 750=0.05% 00:25:20.551 lat (msec) : 4=0.02% 00:25:20.551 cpu : usr=1.30%, sys=6.00%, ctx=4136, majf=0, minf=9 00:25:20.551 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:20.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.551 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.551 issued rwts: total=2048,2085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.551 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:25:20.551 job1: (groupid=0, jobs=1): err= 0: pid=92193: Fri Apr 26 21:28:09 2024 00:25:20.551 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:25:20.551 slat (nsec): min=8428, max=84222, avg=12835.63, stdev=6113.08 00:25:20.551 clat (usec): min=127, max=1737, avg=162.51, stdev=34.58 00:25:20.551 lat (usec): min=137, max=1750, avg=175.34, stdev=35.96 00:25:20.551 clat percentiles (usec): 00:25:20.551 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:25:20.551 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:25:20.551 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 200], 00:25:20.552 | 99.00th=[ 221], 99.50th=[ 225], 99.90th=[ 260], 99.95th=[ 603], 00:25:20.552 | 99.99th=[ 1745] 00:25:20.552 write: IOPS=3157, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec); 0 zone resets 00:25:20.552 slat (usec): min=12, max=167, avg=16.55, stdev= 7.42 00:25:20.552 clat (usec): min=85, max=1169, avg=126.46, stdev=25.41 00:25:20.552 lat (usec): min=98, max=1187, avg=143.02, stdev=26.97 00:25:20.552 clat percentiles (usec): 00:25:20.552 | 1.00th=[ 100], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 114], 00:25:20.552 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 128], 00:25:20.552 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 153], 00:25:20.552 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 243], 99.95th=[ 570], 00:25:20.552 | 99.99th=[ 1172] 00:25:20.552 bw ( KiB/s): min=13224, max=13224, per=33.68%, avg=13224.00, stdev= 0.00, samples=1 00:25:20.552 iops : min= 3306, max= 3306, avg=3306.00, stdev= 0.00, samples=1 00:25:20.552 lat (usec) : 100=0.47%, 250=99.42%, 500=0.05%, 750=0.03% 00:25:20.552 lat (msec) : 2=0.03% 00:25:20.552 cpu : usr=1.80%, sys=6.70%, ctx=6234, majf=0, minf=9 00:25:20.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:20.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.552 issued rwts: total=3072,3161,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:20.552 job2: (groupid=0, jobs=1): err= 0: pid=92194: Fri Apr 26 21:28:09 2024 00:25:20.552 read: IOPS=1528, BW=6114KiB/s (6261kB/s)(6120KiB/1001msec) 00:25:20.552 slat (nsec): min=6367, max=50556, avg=18683.16, stdev=5586.09 00:25:20.552 clat (usec): min=193, max=543, avg=365.62, stdev=56.18 00:25:20.552 lat (usec): min=216, max=567, avg=384.30, stdev=58.61 00:25:20.552 clat percentiles (usec): 00:25:20.552 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 297], 00:25:20.552 | 30.00th=[ 347], 40.00th=[ 363], 50.00th=[ 375], 60.00th=[ 388], 00:25:20.552 | 70.00th=[ 400], 80.00th=[ 416], 90.00th=[ 433], 95.00th=[ 441], 00:25:20.552 | 99.00th=[ 469], 99.50th=[ 478], 99.90th=[ 498], 99.95th=[ 545], 00:25:20.552 | 99.99th=[ 545] 00:25:20.552 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:25:20.552 slat (usec): min=12, max=151, avg=32.14, stdev= 8.98 00:25:20.552 clat (usec): min=118, max=3950, avg=230.82, stdev=160.51 00:25:20.552 lat (usec): min=152, max=3990, avg=262.96, stdev=160.60 00:25:20.552 clat percentiles (usec): 00:25:20.552 | 1.00th=[ 141], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 204], 00:25:20.552 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 227], 00:25:20.552 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 262], 00:25:20.552 | 99.00th=[ 285], 
99.50th=[ 326], 99.90th=[ 3687], 99.95th=[ 3949], 00:25:20.552 | 99.99th=[ 3949] 00:25:20.552 bw ( KiB/s): min= 8192, max= 8192, per=20.86%, avg=8192.00, stdev= 0.00, samples=1 00:25:20.552 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:20.552 lat (usec) : 250=44.29%, 500=55.45%, 750=0.07%, 1000=0.03% 00:25:20.552 lat (msec) : 2=0.07%, 4=0.10% 00:25:20.552 cpu : usr=1.00%, sys=6.50%, ctx=3069, majf=0, minf=14 00:25:20.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:20.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.552 issued rwts: total=1530,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:20.552 job3: (groupid=0, jobs=1): err= 0: pid=92195: Fri Apr 26 21:28:09 2024 00:25:20.552 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:25:20.552 slat (nsec): min=8620, max=39533, avg=14469.22, stdev=5311.84 00:25:20.552 clat (usec): min=142, max=485, avg=179.71, stdev=16.47 00:25:20.552 lat (usec): min=152, max=496, avg=194.18, stdev=17.82 00:25:20.552 clat percentiles (usec): 00:25:20.552 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:25:20.552 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:25:20.552 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:25:20.552 | 99.00th=[ 229], 99.50th=[ 243], 99.90th=[ 343], 99.95th=[ 375], 00:25:20.552 | 99.99th=[ 486] 00:25:20.552 write: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec); 0 zone resets 00:25:20.552 slat (usec): min=12, max=152, avg=20.47, stdev= 9.99 00:25:20.552 clat (usec): min=108, max=500, avg=141.69, stdev=16.26 00:25:20.552 lat (usec): min=123, max=526, avg=162.16, stdev=20.58 00:25:20.552 clat percentiles (usec): 00:25:20.552 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 131], 00:25:20.552 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 143], 00:25:20.552 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 165], 00:25:20.552 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 265], 99.95th=[ 424], 00:25:20.552 | 99.99th=[ 502] 00:25:20.552 bw ( KiB/s): min=12288, max=12288, per=31.29%, avg=12288.00, stdev= 0.00, samples=1 00:25:20.552 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:25:20.552 lat (usec) : 250=99.82%, 500=0.16%, 750=0.02% 00:25:20.552 cpu : usr=1.60%, sys=7.40%, ctx=5607, majf=0, minf=13 00:25:20.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:20.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.552 issued rwts: total=2560,3045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:20.552 00:25:20.552 Run status group 0 (all jobs): 00:25:20.552 READ: bw=35.9MiB/s (37.7MB/s), 6114KiB/s-12.0MiB/s (6261kB/s-12.6MB/s), io=36.0MiB (37.7MB), run=1001-1001msec 00:25:20.552 WRITE: bw=38.3MiB/s (40.2MB/s), 6138KiB/s-12.3MiB/s (6285kB/s-12.9MB/s), io=38.4MiB (40.3MB), run=1001-1001msec 00:25:20.552 00:25:20.552 Disk stats (read/write): 00:25:20.552 nvme0n1: ios=1719/2048, merge=0/0, ticks=423/427, in_queue=850, util=89.08% 00:25:20.552 nvme0n2: ios=2609/2936, merge=0/0, ticks=440/389, in_queue=829, util=89.93% 00:25:20.552 nvme0n3: ios=1121/1536, merge=0/0, 
ticks=439/362, in_queue=801, util=88.82% 00:25:20.552 nvme0n4: ios=2304/2560, merge=0/0, ticks=426/394, in_queue=820, util=89.81% 00:25:20.552 21:28:09 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:25:20.552 [global] 00:25:20.552 thread=1 00:25:20.552 invalidate=1 00:25:20.552 rw=write 00:25:20.552 time_based=1 00:25:20.552 runtime=1 00:25:20.552 ioengine=libaio 00:25:20.552 direct=1 00:25:20.552 bs=4096 00:25:20.552 iodepth=128 00:25:20.552 norandommap=0 00:25:20.552 numjobs=1 00:25:20.552 00:25:20.552 verify_dump=1 00:25:20.552 verify_backlog=512 00:25:20.552 verify_state_save=0 00:25:20.552 do_verify=1 00:25:20.552 verify=crc32c-intel 00:25:20.552 [job0] 00:25:20.552 filename=/dev/nvme0n1 00:25:20.552 [job1] 00:25:20.552 filename=/dev/nvme0n2 00:25:20.552 [job2] 00:25:20.552 filename=/dev/nvme0n3 00:25:20.552 [job3] 00:25:20.552 filename=/dev/nvme0n4 00:25:20.552 Could not set queue depth (nvme0n1) 00:25:20.552 Could not set queue depth (nvme0n2) 00:25:20.552 Could not set queue depth (nvme0n3) 00:25:20.552 Could not set queue depth (nvme0n4) 00:25:20.552 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:20.552 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:20.552 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:20.552 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:20.552 fio-3.35 00:25:20.552 Starting 4 threads 00:25:21.930 00:25:21.930 job0: (groupid=0, jobs=1): err= 0: pid=92249: Fri Apr 26 21:28:10 2024 00:25:21.930 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:25:21.930 slat (usec): min=6, max=21742, avg=259.77, stdev=1444.91 00:25:21.930 clat (usec): min=15392, max=63979, avg=33752.51, stdev=9820.39 00:25:21.930 lat (usec): min=19638, max=64005, avg=34012.28, stdev=9798.81 00:25:21.930 clat percentiles (usec): 00:25:21.930 | 1.00th=[19792], 5.00th=[24511], 10.00th=[25297], 20.00th=[26608], 00:25:21.930 | 30.00th=[26870], 40.00th=[29230], 50.00th=[31327], 60.00th=[33424], 00:25:21.930 | 70.00th=[35914], 80.00th=[39060], 90.00th=[46924], 95.00th=[60556], 00:25:21.930 | 99.00th=[63701], 99.50th=[63701], 99.90th=[63701], 99.95th=[64226], 00:25:21.930 | 99.99th=[64226] 00:25:21.930 write: IOPS=2408, BW=9635KiB/s (9866kB/s)(9664KiB/1003msec); 0 zone resets 00:25:21.930 slat (usec): min=12, max=11178, avg=184.44, stdev=889.83 00:25:21.930 clat (usec): min=2356, max=46109, avg=23285.51, stdev=6941.23 00:25:21.930 lat (usec): min=2393, max=46161, avg=23469.95, stdev=6925.33 00:25:21.930 clat percentiles (usec): 00:25:21.930 | 1.00th=[ 6980], 5.00th=[17433], 10.00th=[18220], 20.00th=[18744], 00:25:21.930 | 30.00th=[19006], 40.00th=[19268], 50.00th=[20317], 60.00th=[22938], 00:25:21.930 | 70.00th=[26346], 80.00th=[29492], 90.00th=[31851], 95.00th=[36439], 00:25:21.930 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:25:21.930 | 99.99th=[45876] 00:25:21.930 bw ( KiB/s): min= 8208, max=10048, per=15.52%, avg=9128.00, stdev=1301.08, samples=2 00:25:21.930 iops : min= 2052, max= 2512, avg=2282.00, stdev=325.27, samples=2 00:25:21.930 lat (msec) : 4=0.36%, 10=0.72%, 20=26.25%, 50=68.48%, 100=4.19% 00:25:21.930 cpu : usr=1.90%, sys=9.68%, ctx=145, majf=0, minf=15 00:25:21.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, 
>=64=98.6% 00:25:21.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:21.930 issued rwts: total=2048,2416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:21.930 job1: (groupid=0, jobs=1): err= 0: pid=92250: Fri Apr 26 21:28:10 2024 00:25:21.930 read: IOPS=4951, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1004msec) 00:25:21.930 slat (usec): min=3, max=5591, avg=97.88, stdev=476.61 00:25:21.930 clat (usec): min=1294, max=18211, avg=12768.69, stdev=1664.95 00:25:21.930 lat (usec): min=4019, max=18226, avg=12866.57, stdev=1696.86 00:25:21.930 clat percentiles (usec): 00:25:21.930 | 1.00th=[ 7504], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11863], 00:25:21.930 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12911], 60.00th=[13042], 00:25:21.930 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14615], 95.00th=[15533], 00:25:21.930 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:25:21.930 | 99.99th=[18220] 00:25:21.930 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:25:21.930 slat (usec): min=7, max=5784, avg=91.89, stdev=400.31 00:25:21.930 clat (usec): min=7214, max=18384, avg=12394.66, stdev=1430.15 00:25:21.930 lat (usec): min=7268, max=18510, avg=12486.55, stdev=1449.18 00:25:21.930 clat percentiles (usec): 00:25:21.930 | 1.00th=[ 8160], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11731], 00:25:21.930 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:25:21.930 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13566], 95.00th=[14222], 00:25:21.930 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18220], 99.95th=[18482], 00:25:21.930 | 99.99th=[18482] 00:25:21.930 bw ( KiB/s): min=20480, max=20480, per=34.82%, avg=20480.00, stdev= 0.00, samples=2 00:25:21.930 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:25:21.930 lat (msec) : 2=0.01%, 10=6.06%, 20=93.93% 00:25:21.930 cpu : usr=4.49%, sys=17.95%, ctx=538, majf=0, minf=11 00:25:21.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:21.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:21.930 issued rwts: total=4971,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:21.930 job2: (groupid=0, jobs=1): err= 0: pid=92251: Fri Apr 26 21:28:10 2024 00:25:21.930 read: IOPS=4093, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1003msec) 00:25:21.930 slat (usec): min=4, max=3586, avg=112.41, stdev=492.67 00:25:21.930 clat (usec): min=2114, max=18267, avg=14783.17, stdev=1245.92 00:25:21.930 lat (usec): min=2145, max=18276, avg=14895.58, stdev=1192.35 00:25:21.930 clat percentiles (usec): 00:25:21.930 | 1.00th=[11863], 5.00th=[12911], 10.00th=[13173], 20.00th=[13960], 00:25:21.930 | 30.00th=[14353], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:25:21.930 | 70.00th=[15270], 80.00th=[15795], 90.00th=[16188], 95.00th=[16581], 00:25:21.930 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17433], 99.95th=[17695], 00:25:21.930 | 99.99th=[18220] 00:25:21.930 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:25:21.930 slat (usec): min=7, max=3762, avg=110.12, stdev=448.20 00:25:21.930 clat (usec): min=2505, max=17354, avg=14299.13, stdev=1453.25 00:25:21.930 lat (usec): min=2567, 
max=17389, avg=14409.26, stdev=1401.47 00:25:21.930 clat percentiles (usec): 00:25:21.930 | 1.00th=[ 7504], 5.00th=[12649], 10.00th=[13173], 20.00th=[13435], 00:25:21.930 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14353], 60.00th=[14615], 00:25:21.930 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:25:21.930 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17171], 99.95th=[17433], 00:25:21.930 | 99.99th=[17433] 00:25:21.930 bw ( KiB/s): min=17816, max=18148, per=30.57%, avg=17982.00, stdev=234.76, samples=2 00:25:21.930 iops : min= 4454, max= 4537, avg=4495.50, stdev=58.69, samples=2 00:25:21.930 lat (msec) : 4=0.34%, 10=0.52%, 20=99.14% 00:25:21.930 cpu : usr=3.09%, sys=12.48%, ctx=518, majf=0, minf=7 00:25:21.930 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:21.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:21.930 issued rwts: total=4106,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.930 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:21.930 job3: (groupid=0, jobs=1): err= 0: pid=92252: Fri Apr 26 21:28:10 2024 00:25:21.930 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:25:21.930 slat (usec): min=4, max=10463, avg=177.67, stdev=907.60 00:25:21.930 clat (usec): min=15177, max=36423, avg=22542.38, stdev=3100.18 00:25:21.930 lat (usec): min=15188, max=36449, avg=22720.05, stdev=3186.12 00:25:21.930 clat percentiles (usec): 00:25:21.930 | 1.00th=[15926], 5.00th=[17695], 10.00th=[19006], 20.00th=[20055], 00:25:21.930 | 30.00th=[20579], 40.00th=[21365], 50.00th=[22414], 60.00th=[22938], 00:25:21.931 | 70.00th=[23725], 80.00th=[25297], 90.00th=[26870], 95.00th=[28967], 00:25:21.931 | 99.00th=[30016], 99.50th=[30278], 99.90th=[33817], 99.95th=[36439], 00:25:21.931 | 99.99th=[36439] 00:25:21.931 write: IOPS=2634, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1006msec); 0 zone resets 00:25:21.931 slat (usec): min=10, max=7117, avg=200.24, stdev=791.54 00:25:21.931 clat (usec): min=4143, max=41266, avg=26098.07, stdev=7289.77 00:25:21.931 lat (usec): min=5063, max=41283, avg=26298.32, stdev=7333.47 00:25:21.931 clat percentiles (usec): 00:25:21.931 | 1.00th=[13435], 5.00th=[16909], 10.00th=[17171], 20.00th=[19006], 00:25:21.931 | 30.00th=[20055], 40.00th=[22938], 50.00th=[26870], 60.00th=[27132], 00:25:21.931 | 70.00th=[29492], 80.00th=[33424], 90.00th=[37487], 95.00th=[38011], 00:25:21.931 | 99.00th=[38536], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157], 00:25:21.931 | 99.99th=[41157] 00:25:21.931 bw ( KiB/s): min= 8192, max=12312, per=17.43%, avg=10252.00, stdev=2913.28, samples=2 00:25:21.931 iops : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2 00:25:21.931 lat (msec) : 10=0.48%, 20=24.63%, 50=74.89% 00:25:21.931 cpu : usr=1.39%, sys=6.07%, ctx=336, majf=0, minf=17 00:25:21.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:21.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:21.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:21.931 issued rwts: total=2560,2650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:21.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:21.931 00:25:21.931 Run status group 0 (all jobs): 00:25:21.931 READ: bw=53.1MiB/s (55.7MB/s), 8167KiB/s-19.3MiB/s (8364kB/s-20.3MB/s), io=53.5MiB (56.1MB), run=1003-1006msec 00:25:21.931 WRITE: bw=57.4MiB/s 
(60.2MB/s), 9635KiB/s-19.9MiB/s (9866kB/s-20.9MB/s), io=57.8MiB (60.6MB), run=1003-1006msec 00:25:21.931 00:25:21.931 Disk stats (read/write): 00:25:21.931 nvme0n1: ios=1874/2048, merge=0/0, ticks=15135/10370, in_queue=25505, util=89.68% 00:25:21.931 nvme0n2: ios=4263/4608, merge=0/0, ticks=25220/24305, in_queue=49525, util=89.93% 00:25:21.931 nvme0n3: ios=3605/4055, merge=0/0, ticks=12767/13117, in_queue=25884, util=89.93% 00:25:21.931 nvme0n4: ios=2048/2519, merge=0/0, ticks=14377/21218, in_queue=35595, util=89.88% 00:25:21.931 21:28:10 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:25:21.931 [global] 00:25:21.931 thread=1 00:25:21.931 invalidate=1 00:25:21.931 rw=randwrite 00:25:21.931 time_based=1 00:25:21.931 runtime=1 00:25:21.931 ioengine=libaio 00:25:21.931 direct=1 00:25:21.931 bs=4096 00:25:21.931 iodepth=128 00:25:21.931 norandommap=0 00:25:21.931 numjobs=1 00:25:21.931 00:25:21.931 verify_dump=1 00:25:21.931 verify_backlog=512 00:25:21.931 verify_state_save=0 00:25:21.931 do_verify=1 00:25:21.931 verify=crc32c-intel 00:25:21.931 [job0] 00:25:21.931 filename=/dev/nvme0n1 00:25:21.931 [job1] 00:25:21.931 filename=/dev/nvme0n2 00:25:21.931 [job2] 00:25:21.931 filename=/dev/nvme0n3 00:25:21.931 [job3] 00:25:21.931 filename=/dev/nvme0n4 00:25:21.931 Could not set queue depth (nvme0n1) 00:25:21.931 Could not set queue depth (nvme0n2) 00:25:21.931 Could not set queue depth (nvme0n3) 00:25:21.931 Could not set queue depth (nvme0n4) 00:25:21.931 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:21.931 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:21.931 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:21.931 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:21.931 fio-3.35 00:25:21.931 Starting 4 threads 00:25:22.939 00:25:22.939 job0: (groupid=0, jobs=1): err= 0: pid=92305: Fri Apr 26 21:28:12 2024 00:25:22.939 read: IOPS=2123, BW=8495KiB/s (8698kB/s)(8520KiB/1003msec) 00:25:22.939 slat (usec): min=3, max=8503, avg=177.36, stdev=901.49 00:25:22.939 clat (usec): min=667, max=37947, avg=21028.94, stdev=4298.31 00:25:22.939 lat (usec): min=7624, max=37970, avg=21206.30, stdev=4361.36 00:25:22.939 clat percentiles (usec): 00:25:22.939 | 1.00th=[ 7832], 5.00th=[16057], 10.00th=[16581], 20.00th=[18744], 00:25:22.939 | 30.00th=[19006], 40.00th=[19530], 50.00th=[19792], 60.00th=[21365], 00:25:22.939 | 70.00th=[22676], 80.00th=[23462], 90.00th=[26346], 95.00th=[29754], 00:25:22.939 | 99.00th=[32637], 99.50th=[32900], 99.90th=[38011], 99.95th=[38011], 00:25:22.939 | 99.99th=[38011] 00:25:22.939 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:25:22.939 slat (usec): min=9, max=7171, avg=239.27, stdev=854.77 00:25:22.939 clat (usec): min=15220, max=50206, avg=31993.56, stdev=7882.61 00:25:22.939 lat (usec): min=15235, max=50223, avg=32232.83, stdev=7927.49 00:25:22.939 clat percentiles (usec): 00:25:22.939 | 1.00th=[15270], 5.00th=[18220], 10.00th=[19530], 20.00th=[25297], 00:25:22.939 | 30.00th=[28705], 40.00th=[30540], 50.00th=[32113], 60.00th=[33817], 00:25:22.939 | 70.00th=[36439], 80.00th=[38536], 90.00th=[41681], 95.00th=[45876], 00:25:22.939 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:25:22.939 | 
99.99th=[50070] 00:25:22.939 bw ( KiB/s): min= 9856, max=10235, per=17.08%, avg=10045.50, stdev=267.99, samples=2 00:25:22.939 iops : min= 2464, max= 2558, avg=2511.00, stdev=66.47, samples=2 00:25:22.939 lat (usec) : 750=0.02% 00:25:22.939 lat (msec) : 10=0.85%, 20=29.04%, 50=69.51%, 100=0.58% 00:25:22.939 cpu : usr=1.20%, sys=4.49%, ctx=351, majf=0, minf=11 00:25:22.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:22.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:22.939 issued rwts: total=2130,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:22.939 job1: (groupid=0, jobs=1): err= 0: pid=92306: Fri Apr 26 21:28:12 2024 00:25:22.939 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:25:22.939 slat (usec): min=5, max=3048, avg=93.61, stdev=389.14 00:25:22.939 clat (usec): min=9382, max=15247, avg=12417.21, stdev=969.22 00:25:22.939 lat (usec): min=9612, max=16487, avg=12510.82, stdev=944.23 00:25:22.939 clat percentiles (usec): 00:25:22.939 | 1.00th=[ 9896], 5.00th=[10814], 10.00th=[11207], 20.00th=[11731], 00:25:22.939 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12387], 60.00th=[12518], 00:25:22.939 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13829], 95.00th=[14091], 00:25:22.939 | 99.00th=[14615], 99.50th=[14746], 99.90th=[15270], 99.95th=[15270], 00:25:22.939 | 99.99th=[15270] 00:25:22.939 write: IOPS=5330, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1004msec); 0 zone resets 00:25:22.939 slat (usec): min=7, max=3799, avg=91.36, stdev=379.40 00:25:22.939 clat (usec): min=1296, max=15888, avg=11809.61, stdev=1176.65 00:25:22.939 lat (usec): min=3293, max=15906, avg=11900.97, stdev=1131.05 00:25:22.939 clat percentiles (usec): 00:25:22.939 | 1.00th=[ 6521], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[11469], 00:25:22.939 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:25:22.939 | 70.00th=[12125], 80.00th=[12256], 90.00th=[13042], 95.00th=[13435], 00:25:22.939 | 99.00th=[14353], 99.50th=[15008], 99.90th=[15926], 99.95th=[15926], 00:25:22.939 | 99.99th=[15926] 00:25:22.939 bw ( KiB/s): min=20496, max=21296, per=35.53%, avg=20896.00, stdev=565.69, samples=2 00:25:22.939 iops : min= 5124, max= 5324, avg=5224.00, stdev=141.42, samples=2 00:25:22.939 lat (msec) : 2=0.01%, 4=0.31%, 10=3.35%, 20=96.33% 00:25:22.939 cpu : usr=4.09%, sys=11.96%, ctx=579, majf=0, minf=8 00:25:22.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:22.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:22.939 issued rwts: total=5120,5352,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:22.939 job2: (groupid=0, jobs=1): err= 0: pid=92307: Fri Apr 26 21:28:12 2024 00:25:22.939 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:25:22.939 slat (usec): min=6, max=7015, avg=117.53, stdev=554.36 00:25:22.939 clat (usec): min=10837, max=26279, avg=15486.33, stdev=2076.90 00:25:22.939 lat (usec): min=11181, max=26318, avg=15603.86, stdev=2102.74 00:25:22.939 clat percentiles (usec): 00:25:22.939 | 1.00th=[11731], 5.00th=[12387], 10.00th=[13173], 20.00th=[14222], 00:25:22.939 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15139], 60.00th=[15533], 00:25:22.939 | 
70.00th=[16057], 80.00th=[16581], 90.00th=[17171], 95.00th=[18744], 00:25:22.939 | 99.00th=[25822], 99.50th=[26084], 99.90th=[26346], 99.95th=[26346], 00:25:22.939 | 99.99th=[26346] 00:25:22.939 write: IOPS=4279, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1002msec); 0 zone resets 00:25:22.939 slat (usec): min=10, max=4433, avg=111.82, stdev=428.58 00:25:22.939 clat (usec): min=1392, max=23439, avg=14717.09, stdev=2102.32 00:25:22.939 lat (usec): min=1425, max=23483, avg=14828.91, stdev=2094.93 00:25:22.939 clat percentiles (usec): 00:25:22.939 | 1.00th=[ 5735], 5.00th=[11600], 10.00th=[11994], 20.00th=[13829], 00:25:22.939 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15008], 60.00th=[15270], 00:25:22.939 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16450], 95.00th=[17171], 00:25:22.939 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20579], 99.95th=[23462], 00:25:22.939 | 99.99th=[23462] 00:25:22.939 bw ( KiB/s): min=16384, max=16904, per=28.30%, avg=16644.00, stdev=367.70, samples=2 00:25:22.939 iops : min= 4096, max= 4226, avg=4161.00, stdev=91.92, samples=2 00:25:22.939 lat (msec) : 2=0.18%, 4=0.01%, 10=0.62%, 20=97.52%, 50=1.67% 00:25:22.939 cpu : usr=4.60%, sys=14.69%, ctx=502, majf=0, minf=9 00:25:22.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:22.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:22.939 issued rwts: total=4096,4288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:22.939 job3: (groupid=0, jobs=1): err= 0: pid=92308: Fri Apr 26 21:28:12 2024 00:25:22.939 read: IOPS=2487, BW=9948KiB/s (10.2MB/s)(9988KiB/1004msec) 00:25:22.939 slat (usec): min=5, max=11086, avg=220.03, stdev=1162.79 00:25:22.939 clat (usec): min=364, max=46198, avg=28131.81, stdev=7311.93 00:25:22.939 lat (usec): min=7371, max=46231, avg=28351.84, stdev=7261.72 00:25:22.939 clat percentiles (usec): 00:25:22.939 | 1.00th=[ 7963], 5.00th=[19006], 10.00th=[20579], 20.00th=[22938], 00:25:22.939 | 30.00th=[23725], 40.00th=[24249], 50.00th=[26084], 60.00th=[28443], 00:25:22.939 | 70.00th=[31065], 80.00th=[34866], 90.00th=[39060], 95.00th=[42206], 00:25:22.939 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:25:22.939 | 99.99th=[46400] 00:25:22.939 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:25:22.939 slat (usec): min=9, max=10737, avg=168.03, stdev=819.78 00:25:22.939 clat (usec): min=12900, max=37579, avg=21775.81, stdev=4607.47 00:25:22.939 lat (usec): min=16421, max=37608, avg=21943.84, stdev=4571.50 00:25:22.939 clat percentiles (usec): 00:25:22.939 | 1.00th=[15664], 5.00th=[16909], 10.00th=[17695], 20.00th=[18744], 00:25:22.939 | 30.00th=[19268], 40.00th=[19792], 50.00th=[20317], 60.00th=[20579], 00:25:22.939 | 70.00th=[22152], 80.00th=[24511], 90.00th=[28967], 95.00th=[32375], 00:25:22.939 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:25:22.939 | 99.99th=[37487] 00:25:22.939 bw ( KiB/s): min= 9208, max=11272, per=17.41%, avg=10240.00, stdev=1459.47, samples=2 00:25:22.939 iops : min= 2302, max= 2818, avg=2560.00, stdev=364.87, samples=2 00:25:22.939 lat (usec) : 500=0.02% 00:25:22.939 lat (msec) : 10=0.63%, 20=25.94%, 50=73.40% 00:25:22.939 cpu : usr=2.59%, sys=8.67%, ctx=160, majf=0, minf=13 00:25:22.939 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:22.939 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:22.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:22.939 issued rwts: total=2497,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:22.939 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:22.939 00:25:22.939 Run status group 0 (all jobs): 00:25:22.939 READ: bw=53.9MiB/s (56.5MB/s), 8495KiB/s-19.9MiB/s (8698kB/s-20.9MB/s), io=54.1MiB (56.7MB), run=1002-1004msec 00:25:22.939 WRITE: bw=57.4MiB/s (60.2MB/s), 9.96MiB/s-20.8MiB/s (10.4MB/s-21.8MB/s), io=57.7MiB (60.5MB), run=1002-1004msec 00:25:22.939 00:25:22.939 Disk stats (read/write): 00:25:22.939 nvme0n1: ios=2097/2071, merge=0/0, ticks=14147/20701, in_queue=34848, util=88.15% 00:25:22.939 nvme0n2: ios=4354/4608, merge=0/0, ticks=12643/12223, in_queue=24866, util=87.46% 00:25:22.939 nvme0n3: ios=3504/3584, merge=0/0, ticks=17083/15619, in_queue=32702, util=89.20% 00:25:22.939 nvme0n4: ios=2048/2208, merge=0/0, ticks=14260/10749, in_queue=25009, util=89.67% 00:25:22.939 21:28:12 -- target/fio.sh@55 -- # sync 00:25:22.939 21:28:12 -- target/fio.sh@59 -- # fio_pid=92327 00:25:22.940 21:28:12 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:25:22.940 21:28:12 -- target/fio.sh@61 -- # sleep 3 00:25:23.198 [global] 00:25:23.198 thread=1 00:25:23.198 invalidate=1 00:25:23.198 rw=read 00:25:23.198 time_based=1 00:25:23.198 runtime=10 00:25:23.198 ioengine=libaio 00:25:23.198 direct=1 00:25:23.198 bs=4096 00:25:23.198 iodepth=1 00:25:23.198 norandommap=1 00:25:23.198 numjobs=1 00:25:23.198 00:25:23.198 [job0] 00:25:23.198 filename=/dev/nvme0n1 00:25:23.198 [job1] 00:25:23.198 filename=/dev/nvme0n2 00:25:23.198 [job2] 00:25:23.198 filename=/dev/nvme0n3 00:25:23.198 [job3] 00:25:23.198 filename=/dev/nvme0n4 00:25:23.198 Could not set queue depth (nvme0n1) 00:25:23.198 Could not set queue depth (nvme0n2) 00:25:23.198 Could not set queue depth (nvme0n3) 00:25:23.198 Could not set queue depth (nvme0n4) 00:25:23.198 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:23.198 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:23.198 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:23.198 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:23.198 fio-3.35 00:25:23.198 Starting 4 threads 00:25:26.478 21:28:15 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:25:26.478 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=60178432, buflen=4096 00:25:26.478 fio: pid=92370, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:26.478 21:28:15 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:25:26.478 fio: pid=92369, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:26.478 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=37179392, buflen=4096 00:25:26.478 21:28:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:26.478 21:28:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:25:26.735 fio: pid=92367, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:26.735 fio: io_u error on file 
/dev/nvme0n1: Remote I/O error: read offset=42459136, buflen=4096 00:25:26.735 21:28:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:26.735 21:28:15 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:25:26.994 fio: pid=92368, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:25:26.994 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=16257024, buflen=4096 00:25:26.994 00:25:26.994 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92367: Fri Apr 26 21:28:16 2024 00:25:26.994 read: IOPS=3037, BW=11.9MiB/s (12.4MB/s)(40.5MiB/3413msec) 00:25:26.994 slat (usec): min=6, max=9924, avg=23.06, stdev=182.32 00:25:26.994 clat (usec): min=47, max=2613, avg=304.39, stdev=74.32 00:25:26.994 lat (usec): min=111, max=10102, avg=327.45, stdev=196.64 00:25:26.994 clat percentiles (usec): 00:25:26.994 | 1.00th=[ 123], 5.00th=[ 176], 10.00th=[ 202], 20.00th=[ 269], 00:25:26.994 | 30.00th=[ 285], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 330], 00:25:26.994 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 363], 95.00th=[ 375], 00:25:26.994 | 99.00th=[ 404], 99.50th=[ 498], 99.90th=[ 889], 99.95th=[ 938], 00:25:26.994 | 99.99th=[ 2147] 00:25:26.994 bw ( KiB/s): min=10808, max=12656, per=19.47%, avg=11548.00, stdev=671.37, samples=6 00:25:26.994 iops : min= 2702, max= 3164, avg=2887.00, stdev=167.84, samples=6 00:25:26.994 lat (usec) : 50=0.01%, 250=15.52%, 500=83.97%, 750=0.29%, 1000=0.16% 00:25:26.994 lat (msec) : 2=0.02%, 4=0.02% 00:25:26.994 cpu : usr=0.76%, sys=4.95%, ctx=10373, majf=0, minf=1 00:25:26.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:26.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.994 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.994 issued rwts: total=10367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:26.994 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92368: Fri Apr 26 21:28:16 2024 00:25:26.994 read: IOPS=5538, BW=21.6MiB/s (22.7MB/s)(79.5MiB/3675msec) 00:25:26.994 slat (usec): min=6, max=13823, avg=13.23, stdev=153.58 00:25:26.994 clat (usec): min=102, max=21808, avg=166.40, stdev=158.19 00:25:26.994 lat (usec): min=111, max=21817, avg=179.64, stdev=221.12 00:25:26.994 clat percentiles (usec): 00:25:26.994 | 1.00th=[ 113], 5.00th=[ 123], 10.00th=[ 139], 20.00th=[ 149], 00:25:26.994 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:25:26.994 | 70.00th=[ 172], 80.00th=[ 182], 90.00th=[ 196], 95.00th=[ 208], 00:25:26.994 | 99.00th=[ 253], 99.50th=[ 281], 99.90th=[ 603], 99.95th=[ 685], 00:25:26.994 | 99.99th=[ 2442] 00:25:26.994 bw ( KiB/s): min=20072, max=23504, per=37.21%, avg=22069.29, stdev=1326.57, samples=7 00:25:26.994 iops : min= 5018, max= 5876, avg=5517.29, stdev=331.63, samples=7 00:25:26.994 lat (usec) : 250=98.91%, 500=0.90%, 750=0.15% 00:25:26.994 lat (msec) : 2=0.02%, 4=0.01%, 50=0.01% 00:25:26.994 cpu : usr=0.73%, sys=5.06%, ctx=20372, majf=0, minf=1 00:25:26.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:26.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.994 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.994 issued rwts: 
total=20354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:26.994 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92369: Fri Apr 26 21:28:16 2024 00:25:26.994 read: IOPS=2847, BW=11.1MiB/s (11.7MB/s)(35.5MiB/3188msec) 00:25:26.994 slat (usec): min=5, max=12723, avg=19.38, stdev=145.45 00:25:26.994 clat (usec): min=143, max=3577, avg=329.68, stdev=78.02 00:25:26.994 lat (usec): min=155, max=12981, avg=349.06, stdev=164.43 00:25:26.994 clat percentiles (usec): 00:25:26.994 | 1.00th=[ 176], 5.00th=[ 251], 10.00th=[ 273], 20.00th=[ 293], 00:25:26.994 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 343], 00:25:26.994 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 375], 95.00th=[ 388], 00:25:26.994 | 99.00th=[ 478], 99.50th=[ 562], 99.90th=[ 963], 99.95th=[ 1401], 00:25:26.994 | 99.99th=[ 3589] 00:25:26.994 bw ( KiB/s): min=10896, max=11864, per=19.04%, avg=11294.67, stdev=389.44, samples=6 00:25:26.994 iops : min= 2724, max= 2966, avg=2823.67, stdev=97.36, samples=6 00:25:26.994 lat (usec) : 250=4.89%, 500=94.29%, 750=0.51%, 1000=0.20% 00:25:26.994 lat (msec) : 2=0.06%, 4=0.04% 00:25:26.994 cpu : usr=1.10%, sys=4.39%, ctx=9081, majf=0, minf=1 00:25:26.994 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:26.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.994 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.994 issued rwts: total=9078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.994 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:26.994 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92370: Fri Apr 26 21:28:16 2024 00:25:26.994 read: IOPS=5033, BW=19.7MiB/s (20.6MB/s)(57.4MiB/2919msec) 00:25:26.994 slat (usec): min=8, max=129, avg=12.33, stdev= 4.72 00:25:26.994 clat (usec): min=128, max=1190, avg=185.19, stdev=56.18 00:25:26.995 lat (usec): min=141, max=1203, avg=197.51, stdev=58.30 00:25:26.995 clat percentiles (usec): 00:25:26.995 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:25:26.995 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:25:26.995 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 219], 95.00th=[ 334], 00:25:26.995 | 99.00th=[ 408], 99.50th=[ 474], 99.90th=[ 578], 99.95th=[ 627], 00:25:26.995 | 99.99th=[ 1045] 00:25:26.995 bw ( KiB/s): min=19656, max=23064, per=35.58%, avg=21102.40, stdev=1306.08, samples=5 00:25:26.995 iops : min= 4914, max= 5766, avg=5275.60, stdev=326.52, samples=5 00:25:26.995 lat (usec) : 250=92.66%, 500=7.04%, 750=0.25%, 1000=0.03% 00:25:26.995 lat (msec) : 2=0.01% 00:25:26.995 cpu : usr=0.89%, sys=5.21%, ctx=14693, majf=0, minf=2 00:25:26.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:26.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.995 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.995 issued rwts: total=14693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:26.995 00:25:26.995 Run status group 0 (all jobs): 00:25:26.995 READ: bw=57.9MiB/s (60.7MB/s), 11.1MiB/s-21.6MiB/s (11.7MB/s-22.7MB/s), io=213MiB (223MB), run=2919-3675msec 00:25:26.995 00:25:26.995 Disk stats (read/write): 00:25:26.995 nvme0n1: ios=10224/0, merge=0/0, ticks=3205/0, in_queue=3205, 
util=95.74% 00:25:26.995 nvme0n2: ios=20034/0, merge=0/0, ticks=3398/0, in_queue=3398, util=95.80% 00:25:26.995 nvme0n3: ios=8893/0, merge=0/0, ticks=2936/0, in_queue=2936, util=96.35% 00:25:26.995 nvme0n4: ios=14599/0, merge=0/0, ticks=2741/0, in_queue=2741, util=96.74% 00:25:26.995 21:28:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:26.995 21:28:16 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:25:27.253 21:28:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:27.253 21:28:16 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:25:27.819 21:28:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:27.819 21:28:16 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:25:28.078 21:28:17 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:28.078 21:28:17 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:25:28.078 21:28:17 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:28.078 21:28:17 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:25:28.336 21:28:17 -- target/fio.sh@69 -- # fio_status=0 00:25:28.336 21:28:17 -- target/fio.sh@70 -- # wait 92327 00:25:28.336 21:28:17 -- target/fio.sh@70 -- # fio_status=4 00:25:28.336 21:28:17 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:28.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:28.595 21:28:17 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:28.595 21:28:17 -- common/autotest_common.sh@1205 -- # local i=0 00:25:28.595 21:28:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:25:28.595 21:28:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:28.595 21:28:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:28.595 21:28:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:25:28.595 nvmf hotplug test: fio failed as expected 00:25:28.595 21:28:17 -- common/autotest_common.sh@1217 -- # return 0 00:25:28.595 21:28:17 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:25:28.595 21:28:17 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:25:28.595 21:28:17 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:28.853 21:28:17 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:25:28.853 21:28:17 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:25:28.853 21:28:17 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:25:28.853 21:28:17 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:25:28.853 21:28:17 -- target/fio.sh@91 -- # nvmftestfini 00:25:28.853 21:28:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:28.853 21:28:17 -- nvmf/common.sh@117 -- # sync 00:25:28.853 21:28:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:28.853 21:28:17 -- nvmf/common.sh@120 -- # set +e 00:25:28.853 21:28:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:28.853 21:28:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:28.853 rmmod nvme_tcp 00:25:28.853 rmmod 
nvme_fabrics 00:25:28.853 rmmod nvme_keyring 00:25:28.853 21:28:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:28.853 21:28:17 -- nvmf/common.sh@124 -- # set -e 00:25:28.853 21:28:17 -- nvmf/common.sh@125 -- # return 0 00:25:28.853 21:28:17 -- nvmf/common.sh@478 -- # '[' -n 91826 ']' 00:25:28.853 21:28:17 -- nvmf/common.sh@479 -- # killprocess 91826 00:25:28.853 21:28:17 -- common/autotest_common.sh@936 -- # '[' -z 91826 ']' 00:25:28.853 21:28:17 -- common/autotest_common.sh@940 -- # kill -0 91826 00:25:28.853 21:28:17 -- common/autotest_common.sh@941 -- # uname 00:25:28.853 21:28:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:28.853 21:28:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91826 00:25:28.853 killing process with pid 91826 00:25:28.853 21:28:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:28.853 21:28:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:28.853 21:28:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91826' 00:25:28.853 21:28:17 -- common/autotest_common.sh@955 -- # kill 91826 00:25:28.854 21:28:17 -- common/autotest_common.sh@960 -- # wait 91826 00:25:29.112 21:28:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:29.112 21:28:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:29.112 21:28:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:29.112 21:28:18 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:29.112 21:28:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:29.112 21:28:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.112 21:28:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.112 21:28:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.112 21:28:18 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:29.112 00:25:29.112 real 0m20.185s 00:25:29.112 user 1m19.838s 00:25:29.112 sys 0m7.599s 00:25:29.112 21:28:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:29.112 21:28:18 -- common/autotest_common.sh@10 -- # set +x 00:25:29.112 ************************************ 00:25:29.112 END TEST nvmf_fio_target 00:25:29.112 ************************************ 00:25:29.112 21:28:18 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:25:29.112 21:28:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:29.112 21:28:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:29.112 21:28:18 -- common/autotest_common.sh@10 -- # set +x 00:25:29.112 ************************************ 00:25:29.112 START TEST nvmf_bdevio 00:25:29.112 ************************************ 00:25:29.112 21:28:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:25:29.371 * Looking for test storage... 
00:25:29.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:29.371 21:28:18 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:29.371 21:28:18 -- nvmf/common.sh@7 -- # uname -s 00:25:29.371 21:28:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.371 21:28:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.371 21:28:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.371 21:28:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.371 21:28:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.371 21:28:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.371 21:28:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.371 21:28:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.371 21:28:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.371 21:28:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.371 21:28:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:25:29.371 21:28:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:25:29.371 21:28:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.371 21:28:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.371 21:28:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:29.371 21:28:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.371 21:28:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:29.371 21:28:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.371 21:28:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.371 21:28:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.371 21:28:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.371 21:28:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.371 21:28:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.371 21:28:18 -- paths/export.sh@5 -- # export PATH 00:25:29.371 21:28:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.371 21:28:18 -- nvmf/common.sh@47 -- # : 0 00:25:29.371 21:28:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:29.371 21:28:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:29.371 21:28:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.371 21:28:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.371 21:28:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.371 21:28:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:29.371 21:28:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:29.371 21:28:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:29.371 21:28:18 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:29.371 21:28:18 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:29.371 21:28:18 -- target/bdevio.sh@14 -- # nvmftestinit 00:25:29.371 21:28:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:29.371 21:28:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.371 21:28:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:29.371 21:28:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:29.371 21:28:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:29.371 21:28:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.371 21:28:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:29.371 21:28:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.371 21:28:18 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:29.371 21:28:18 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:29.371 21:28:18 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:29.371 21:28:18 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:29.371 21:28:18 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:29.371 21:28:18 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:29.371 21:28:18 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.371 21:28:18 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.371 21:28:18 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:29.371 21:28:18 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:29.371 21:28:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:29.371 21:28:18 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:29.371 21:28:18 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:29.371 21:28:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.371 21:28:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:29.372 21:28:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:29.372 21:28:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:29.372 21:28:18 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:29.372 21:28:18 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:29.372 21:28:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:29.372 Cannot find device "nvmf_tgt_br" 00:25:29.372 21:28:18 -- nvmf/common.sh@155 -- # true 00:25:29.372 21:28:18 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:29.372 Cannot find device "nvmf_tgt_br2" 00:25:29.372 21:28:18 -- nvmf/common.sh@156 -- # true 00:25:29.372 21:28:18 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:29.372 21:28:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:29.372 Cannot find device "nvmf_tgt_br" 00:25:29.629 21:28:18 -- nvmf/common.sh@158 -- # true 00:25:29.629 21:28:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:29.629 Cannot find device "nvmf_tgt_br2" 00:25:29.629 21:28:18 -- nvmf/common.sh@159 -- # true 00:25:29.629 21:28:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:29.629 21:28:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:29.629 21:28:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:29.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:29.629 21:28:18 -- nvmf/common.sh@162 -- # true 00:25:29.629 21:28:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:29.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:29.629 21:28:18 -- nvmf/common.sh@163 -- # true 00:25:29.629 21:28:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:29.629 21:28:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:29.629 21:28:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:29.629 21:28:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:29.629 21:28:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:29.629 21:28:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:29.629 21:28:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:29.629 21:28:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:29.629 21:28:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:29.629 21:28:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:29.629 21:28:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:29.629 21:28:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:29.629 21:28:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:29.629 21:28:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:29.629 21:28:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:29.629 21:28:18 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:25:29.629 21:28:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:29.629 21:28:18 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:29.629 21:28:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:29.629 21:28:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:29.629 21:28:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:29.629 21:28:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:29.629 21:28:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:29.629 21:28:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:29.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:25:29.629 00:25:29.629 --- 10.0.0.2 ping statistics --- 00:25:29.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.629 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:25:29.630 21:28:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:29.630 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:29.630 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:25:29.630 00:25:29.630 --- 10.0.0.3 ping statistics --- 00:25:29.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.630 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:25:29.630 21:28:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:29.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:29.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:25:29.630 00:25:29.630 --- 10.0.0.1 ping statistics --- 00:25:29.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.630 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:25:29.630 21:28:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.630 21:28:18 -- nvmf/common.sh@422 -- # return 0 00:25:29.630 21:28:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:29.630 21:28:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.630 21:28:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:29.630 21:28:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:29.630 21:28:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.630 21:28:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:29.630 21:28:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:29.630 21:28:18 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:29.630 21:28:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:29.630 21:28:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:29.630 21:28:18 -- common/autotest_common.sh@10 -- # set +x 00:25:29.888 21:28:18 -- nvmf/common.sh@470 -- # nvmfpid=92696 00:25:29.888 21:28:18 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:25:29.888 21:28:18 -- nvmf/common.sh@471 -- # waitforlisten 92696 00:25:29.888 21:28:18 -- common/autotest_common.sh@817 -- # '[' -z 92696 ']' 00:25:29.888 21:28:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.888 21:28:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:29.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
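For reference, the nvmf_veth_init sequence traced above reduces to the following sketch (commands, interface names, the nvmf_tgt_ns_spdk namespace and the 10.0.0.0/24 addresses are all taken from the trace; the error output from the preceding cleanup attempts is omitted):

  # target runs in its own network namespace; three veth pairs connect it to the host
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addressing: initiator 10.0.0.1, target 10.0.0.2 / 10.0.0.3 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring links up and bridge the host-side peers together
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # open NVMe/TCP port 4420 on the initiator interface, allow bridge-local forwarding, verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1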
00:25:29.888 21:28:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.888 21:28:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:29.888 21:28:18 -- common/autotest_common.sh@10 -- # set +x 00:25:29.888 [2024-04-26 21:28:18.937646] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:29.888 [2024-04-26 21:28:18.937714] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.888 [2024-04-26 21:28:19.079234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:29.888 [2024-04-26 21:28:19.132329] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.888 [2024-04-26 21:28:19.132393] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.888 [2024-04-26 21:28:19.132401] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.888 [2024-04-26 21:28:19.132406] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.888 [2024-04-26 21:28:19.132410] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.888 [2024-04-26 21:28:19.132572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:29.888 [2024-04-26 21:28:19.132638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:25:29.888 [2024-04-26 21:28:19.133671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:29.888 [2024-04-26 21:28:19.133672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:25:30.881 21:28:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:30.881 21:28:19 -- common/autotest_common.sh@850 -- # return 0 00:25:30.881 21:28:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:30.881 21:28:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:30.881 21:28:19 -- common/autotest_common.sh@10 -- # set +x 00:25:30.881 21:28:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.881 21:28:19 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:30.881 21:28:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.881 21:28:19 -- common/autotest_common.sh@10 -- # set +x 00:25:30.881 [2024-04-26 21:28:19.875504] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.881 21:28:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.881 21:28:19 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:30.881 21:28:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.881 21:28:19 -- common/autotest_common.sh@10 -- # set +x 00:25:30.881 Malloc0 00:25:30.881 21:28:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.881 21:28:19 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:30.881 21:28:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.881 21:28:19 -- common/autotest_common.sh@10 -- # set +x 00:25:30.881 21:28:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.881 21:28:19 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:30.881 21:28:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.881 21:28:19 -- common/autotest_common.sh@10 -- # set +x 00:25:30.881 21:28:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.881 21:28:19 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:30.881 21:28:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:30.881 21:28:19 -- common/autotest_common.sh@10 -- # set +x 00:25:30.881 [2024-04-26 21:28:19.943560] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.881 21:28:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:30.881 21:28:19 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:30.881 21:28:19 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:25:30.881 21:28:19 -- nvmf/common.sh@521 -- # config=() 00:25:30.881 21:28:19 -- nvmf/common.sh@521 -- # local subsystem config 00:25:30.881 21:28:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:30.881 21:28:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:30.881 { 00:25:30.881 "params": { 00:25:30.881 "name": "Nvme$subsystem", 00:25:30.881 "trtype": "$TEST_TRANSPORT", 00:25:30.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.881 "adrfam": "ipv4", 00:25:30.881 "trsvcid": "$NVMF_PORT", 00:25:30.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.881 "hdgst": ${hdgst:-false}, 00:25:30.881 "ddgst": ${ddgst:-false} 00:25:30.881 }, 00:25:30.881 "method": "bdev_nvme_attach_controller" 00:25:30.881 } 00:25:30.881 EOF 00:25:30.881 )") 00:25:30.881 21:28:19 -- nvmf/common.sh@543 -- # cat 00:25:30.881 21:28:19 -- nvmf/common.sh@545 -- # jq . 00:25:30.881 21:28:19 -- nvmf/common.sh@546 -- # IFS=, 00:25:30.881 21:28:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:30.881 "params": { 00:25:30.881 "name": "Nvme1", 00:25:30.881 "trtype": "tcp", 00:25:30.881 "traddr": "10.0.0.2", 00:25:30.881 "adrfam": "ipv4", 00:25:30.881 "trsvcid": "4420", 00:25:30.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:30.881 "hdgst": false, 00:25:30.881 "ddgst": false 00:25:30.881 }, 00:25:30.881 "method": "bdev_nvme_attach_controller" 00:25:30.881 }' 00:25:30.881 [2024-04-26 21:28:20.003139] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
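Collapsed out of the xtrace above, the bdevio.sh setup provisions the target entirely over JSON-RPC (a sketch; rpc_cmd in the trace forwards to scripts/rpc.py, the same tool exported as rpc_py later in this log, against the target's /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, flags exactly as logged
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio app itself is not configured over RPC: it receives the bdev_nvme_attach_controller JSON printed above (controller Nvme1, traddr 10.0.0.2, trsvcid 4420) on /dev/fd/62 via --json, so it connects to that listener as an NVMe/TCP initiator and runs its block-device test suite against the resulting Nvme1n1 bdev.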
00:25:30.881 [2024-04-26 21:28:20.003207] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92752 ] 00:25:31.139 [2024-04-26 21:28:20.142805] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:31.139 [2024-04-26 21:28:20.199625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.139 [2024-04-26 21:28:20.199690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.139 [2024-04-26 21:28:20.199694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.139 I/O targets: 00:25:31.139 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:31.139 00:25:31.139 00:25:31.139 CUnit - A unit testing framework for C - Version 2.1-3 00:25:31.139 http://cunit.sourceforge.net/ 00:25:31.139 00:25:31.139 00:25:31.139 Suite: bdevio tests on: Nvme1n1 00:25:31.396 Test: blockdev write read block ...passed 00:25:31.396 Test: blockdev write zeroes read block ...passed 00:25:31.396 Test: blockdev write zeroes read no split ...passed 00:25:31.396 Test: blockdev write zeroes read split ...passed 00:25:31.396 Test: blockdev write zeroes read split partial ...passed 00:25:31.396 Test: blockdev reset ...[2024-04-26 21:28:20.483268] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.396 [2024-04-26 21:28:20.483408] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a059f0 (9): Bad file descriptor 00:25:31.396 [2024-04-26 21:28:20.500892] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:31.396 passed 00:25:31.396 Test: blockdev write read 8 blocks ...passed 00:25:31.396 Test: blockdev write read size > 128k ...passed 00:25:31.396 Test: blockdev write read invalid size ...passed 00:25:31.396 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:31.396 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:31.396 Test: blockdev write read max offset ...passed 00:25:31.396 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:31.397 Test: blockdev writev readv 8 blocks ...passed 00:25:31.397 Test: blockdev writev readv 30 x 1block ...passed 00:25:31.654 Test: blockdev writev readv block ...passed 00:25:31.654 Test: blockdev writev readv size > 128k ...passed 00:25:31.654 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:31.654 Test: blockdev comparev and writev ...[2024-04-26 21:28:20.671000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:31.654 [2024-04-26 21:28:20.671055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.654 [2024-04-26 21:28:20.671071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:31.654 [2024-04-26 21:28:20.671078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:31.654 [2024-04-26 21:28:20.671353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:31.654 [2024-04-26 21:28:20.671364] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:31.654 [2024-04-26 21:28:20.671378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:31.654 [2024-04-26 21:28:20.671385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:31.654 [2024-04-26 21:28:20.671645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:31.654 [2024-04-26 21:28:20.671655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:31.654 [2024-04-26 21:28:20.671667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:31.654 [2024-04-26 21:28:20.671674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:31.654 [2024-04-26 21:28:20.671921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:31.654 [2024-04-26 21:28:20.671930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:31.654 [2024-04-26 21:28:20.671942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:31.654 [2024-04-26 21:28:20.671948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:31.654 passed 00:25:31.654 Test: blockdev nvme passthru rw ...passed 00:25:31.654 Test: blockdev nvme passthru vendor specific ...[2024-04-26 21:28:20.755660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:31.654 [2024-04-26 21:28:20.755703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:31.654 [2024-04-26 21:28:20.755809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:31.654 [2024-04-26 21:28:20.755818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:31.654 [2024-04-26 21:28:20.755912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:31.654 [2024-04-26 21:28:20.755922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:31.654 passed 00:25:31.654 Test: blockdev nvme admin passthru ...[2024-04-26 21:28:20.756020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:31.654 [2024-04-26 21:28:20.756033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:31.654 passed 00:25:31.654 Test: blockdev copy ...passed 00:25:31.654 00:25:31.654 Run Summary: Type Total Ran Passed Failed Inactive 00:25:31.654 suites 1 1 n/a 0 0 00:25:31.654 tests 23 23 23 0 0 00:25:31.654 asserts 
152 152 152 0 n/a 00:25:31.654 00:25:31.654 Elapsed time = 0.914 seconds 00:25:31.912 21:28:20 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:31.912 21:28:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.912 21:28:20 -- common/autotest_common.sh@10 -- # set +x 00:25:31.912 21:28:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.912 21:28:20 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:31.912 21:28:21 -- target/bdevio.sh@30 -- # nvmftestfini 00:25:31.912 21:28:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:31.912 21:28:21 -- nvmf/common.sh@117 -- # sync 00:25:31.912 21:28:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:31.912 21:28:21 -- nvmf/common.sh@120 -- # set +e 00:25:31.912 21:28:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:31.912 21:28:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:31.912 rmmod nvme_tcp 00:25:31.912 rmmod nvme_fabrics 00:25:31.912 rmmod nvme_keyring 00:25:31.912 21:28:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:31.912 21:28:21 -- nvmf/common.sh@124 -- # set -e 00:25:31.912 21:28:21 -- nvmf/common.sh@125 -- # return 0 00:25:31.912 21:28:21 -- nvmf/common.sh@478 -- # '[' -n 92696 ']' 00:25:31.912 21:28:21 -- nvmf/common.sh@479 -- # killprocess 92696 00:25:31.912 21:28:21 -- common/autotest_common.sh@936 -- # '[' -z 92696 ']' 00:25:31.912 21:28:21 -- common/autotest_common.sh@940 -- # kill -0 92696 00:25:31.912 21:28:21 -- common/autotest_common.sh@941 -- # uname 00:25:31.912 21:28:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:31.912 21:28:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92696 00:25:31.912 21:28:21 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:25:31.912 killing process with pid 92696 00:25:31.912 21:28:21 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:25:31.912 21:28:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92696' 00:25:31.912 21:28:21 -- common/autotest_common.sh@955 -- # kill 92696 00:25:31.912 21:28:21 -- common/autotest_common.sh@960 -- # wait 92696 00:25:32.170 21:28:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:32.170 21:28:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:32.170 21:28:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:32.170 21:28:21 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:32.170 21:28:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:32.170 21:28:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.170 21:28:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.170 21:28:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.170 21:28:21 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:32.428 00:25:32.428 real 0m3.062s 00:25:32.428 user 0m10.849s 00:25:32.428 sys 0m0.775s 00:25:32.428 21:28:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:32.428 21:28:21 -- common/autotest_common.sh@10 -- # set +x 00:25:32.428 ************************************ 00:25:32.428 END TEST nvmf_bdevio 00:25:32.428 ************************************ 00:25:32.428 21:28:21 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:25:32.428 21:28:21 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:32.428 21:28:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:25:32.428 
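Teardown for this pass is symmetric with the setup; condensed from the nvmftestfini/nvmfcleanup trace above (pid 92696 is the nvmf_tgt started earlier; module and interface names are as logged):

  modprobe -v -r nvme-tcp        # also drops nvme_tcp / nvme_fabrics / nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill 92696 && wait 92696       # stop the target reactors
  _remove_spdk_ns                # nvmf/common.sh helper that removes the nvmf_tgt_ns_spdk namespace
  ip -4 addr flush nvmf_init_if  # drop the initiator-side address

The same bdevio pass is then repeated below without hugepages (nvmf_bdevio_no_huge).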
21:28:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:32.428 21:28:21 -- common/autotest_common.sh@10 -- # set +x 00:25:32.428 ************************************ 00:25:32.428 START TEST nvmf_bdevio_no_huge 00:25:32.428 ************************************ 00:25:32.428 21:28:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:32.428 * Looking for test storage... 00:25:32.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:32.428 21:28:21 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:32.428 21:28:21 -- nvmf/common.sh@7 -- # uname -s 00:25:32.428 21:28:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.428 21:28:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.428 21:28:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.428 21:28:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.428 21:28:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.428 21:28:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.428 21:28:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.428 21:28:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.428 21:28:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.686 21:28:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.686 21:28:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:25:32.686 21:28:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:25:32.686 21:28:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.686 21:28:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.686 21:28:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:32.686 21:28:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.686 21:28:21 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:32.686 21:28:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.686 21:28:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.686 21:28:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.686 21:28:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.687 21:28:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.687 21:28:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.687 21:28:21 -- paths/export.sh@5 -- # export PATH 00:25:32.687 21:28:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.687 21:28:21 -- nvmf/common.sh@47 -- # : 0 00:25:32.687 21:28:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:32.687 21:28:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:32.687 21:28:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.687 21:28:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.687 21:28:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.687 21:28:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:32.687 21:28:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:32.687 21:28:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:32.687 21:28:21 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:32.687 21:28:21 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:32.687 21:28:21 -- target/bdevio.sh@14 -- # nvmftestinit 00:25:32.687 21:28:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:32.687 21:28:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.687 21:28:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:32.687 21:28:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:32.687 21:28:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:32.687 21:28:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.687 21:28:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.687 21:28:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.687 21:28:21 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:32.687 21:28:21 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:32.687 21:28:21 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:32.687 21:28:21 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:32.687 21:28:21 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:32.687 21:28:21 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:32.687 21:28:21 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:32.687 21:28:21 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:32.687 21:28:21 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:32.687 21:28:21 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:32.687 21:28:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:32.687 21:28:21 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:32.687 21:28:21 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:32.687 21:28:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:32.687 21:28:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:32.687 21:28:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:32.687 21:28:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:32.687 21:28:21 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:32.687 21:28:21 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:32.687 21:28:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:32.687 Cannot find device "nvmf_tgt_br" 00:25:32.687 21:28:21 -- nvmf/common.sh@155 -- # true 00:25:32.687 21:28:21 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:32.687 Cannot find device "nvmf_tgt_br2" 00:25:32.687 21:28:21 -- nvmf/common.sh@156 -- # true 00:25:32.687 21:28:21 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:32.687 21:28:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:32.687 Cannot find device "nvmf_tgt_br" 00:25:32.687 21:28:21 -- nvmf/common.sh@158 -- # true 00:25:32.687 21:28:21 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:32.687 Cannot find device "nvmf_tgt_br2" 00:25:32.687 21:28:21 -- nvmf/common.sh@159 -- # true 00:25:32.687 21:28:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:32.687 21:28:21 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:32.687 21:28:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:32.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:32.687 21:28:21 -- nvmf/common.sh@162 -- # true 00:25:32.687 21:28:21 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:32.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:32.687 21:28:21 -- nvmf/common.sh@163 -- # true 00:25:32.687 21:28:21 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:32.687 21:28:21 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:32.687 21:28:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:32.687 21:28:21 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:32.687 21:28:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:32.945 21:28:21 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:32.945 21:28:21 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:32.945 21:28:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:32.945 21:28:21 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:32.945 21:28:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:32.945 21:28:21 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:32.945 21:28:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:32.945 21:28:21 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:32.945 21:28:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:32.945 21:28:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:32.945 21:28:21 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:25:32.945 21:28:22 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:32.945 21:28:22 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:32.945 21:28:22 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:32.945 21:28:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:32.945 21:28:22 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:32.945 21:28:22 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:32.945 21:28:22 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:32.945 21:28:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:32.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:25:32.945 00:25:32.945 --- 10.0.0.2 ping statistics --- 00:25:32.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.945 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:25:32.945 21:28:22 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:32.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:32.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:25:32.945 00:25:32.945 --- 10.0.0.3 ping statistics --- 00:25:32.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.945 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:25:32.945 21:28:22 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:32.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:32.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:25:32.945 00:25:32.945 --- 10.0.0.1 ping statistics --- 00:25:32.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.945 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:32.945 21:28:22 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.945 21:28:22 -- nvmf/common.sh@422 -- # return 0 00:25:32.945 21:28:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:32.945 21:28:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.945 21:28:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:32.945 21:28:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:32.945 21:28:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.945 21:28:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:32.945 21:28:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:32.945 21:28:22 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:32.945 21:28:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:32.945 21:28:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:32.945 21:28:22 -- common/autotest_common.sh@10 -- # set +x 00:25:32.945 21:28:22 -- nvmf/common.sh@470 -- # nvmfpid=92937 00:25:32.945 21:28:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:25:32.945 21:28:22 -- nvmf/common.sh@471 -- # waitforlisten 92937 00:25:32.945 21:28:22 -- common/autotest_common.sh@817 -- # '[' -z 92937 ']' 00:25:32.945 21:28:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.945 21:28:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:32.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:32.946 21:28:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.946 21:28:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:32.946 21:28:22 -- common/autotest_common.sh@10 -- # set +x 00:25:32.946 [2024-04-26 21:28:22.152922] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:32.946 [2024-04-26 21:28:22.152996] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:25:33.203 [2024-04-26 21:28:22.283999] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:33.203 [2024-04-26 21:28:22.367097] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.203 [2024-04-26 21:28:22.367148] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.203 [2024-04-26 21:28:22.367155] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.203 [2024-04-26 21:28:22.367160] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.203 [2024-04-26 21:28:22.367165] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:33.203 [2024-04-26 21:28:22.367893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:33.203 [2024-04-26 21:28:22.367961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:25:33.203 [2024-04-26 21:28:22.368158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:33.203 [2024-04-26 21:28:22.368161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:25:34.182 21:28:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:34.182 21:28:23 -- common/autotest_common.sh@850 -- # return 0 00:25:34.182 21:28:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:34.182 21:28:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:34.182 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:25:34.182 21:28:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.182 21:28:23 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:34.182 21:28:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.182 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:25:34.182 [2024-04-26 21:28:23.139106] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.183 21:28:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.183 21:28:23 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:34.183 21:28:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.183 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:25:34.183 Malloc0 00:25:34.183 21:28:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.183 21:28:23 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:34.183 21:28:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.183 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:25:34.183 21:28:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.183 21:28:23 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:34.183 21:28:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.183 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:25:34.183 21:28:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.183 21:28:23 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.183 21:28:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.183 21:28:23 -- common/autotest_common.sh@10 -- # set +x 00:25:34.183 [2024-04-26 21:28:23.196651] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.183 21:28:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.183 21:28:23 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:25:34.183 21:28:23 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:34.183 21:28:23 -- nvmf/common.sh@521 -- # config=() 00:25:34.183 21:28:23 -- nvmf/common.sh@521 -- # local subsystem config 00:25:34.183 21:28:23 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:34.183 21:28:23 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:34.183 { 00:25:34.183 "params": { 00:25:34.183 "name": "Nvme$subsystem", 00:25:34.183 "trtype": "$TEST_TRANSPORT", 00:25:34.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.183 "adrfam": "ipv4", 00:25:34.183 "trsvcid": "$NVMF_PORT", 00:25:34.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.183 "hdgst": ${hdgst:-false}, 00:25:34.183 "ddgst": ${ddgst:-false} 00:25:34.183 }, 00:25:34.183 "method": "bdev_nvme_attach_controller" 00:25:34.183 } 00:25:34.183 EOF 00:25:34.183 )") 00:25:34.183 21:28:23 -- nvmf/common.sh@543 -- # cat 00:25:34.183 21:28:23 -- nvmf/common.sh@545 -- # jq . 00:25:34.183 21:28:23 -- nvmf/common.sh@546 -- # IFS=, 00:25:34.183 21:28:23 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:34.183 "params": { 00:25:34.183 "name": "Nvme1", 00:25:34.183 "trtype": "tcp", 00:25:34.183 "traddr": "10.0.0.2", 00:25:34.183 "adrfam": "ipv4", 00:25:34.183 "trsvcid": "4420", 00:25:34.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:34.183 "hdgst": false, 00:25:34.183 "ddgst": false 00:25:34.183 }, 00:25:34.183 "method": "bdev_nvme_attach_controller" 00:25:34.183 }' 00:25:34.183 [2024-04-26 21:28:23.255600] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
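The only functional difference from the first bdevio pass shows up in the two launch lines above: both the target and the bdevio initiator run without hugepages and capped at 1024 MB (full binary paths as in the trace):

  ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  bdevio --json /dev/fd/62 --no-huge -s 1024      # same generated attach JSON as before

Accordingly, the DPDK EAL parameter lines for this pass show --no-huge -m 1024 --iova-mode=va, where the hugepage-backed run earlier used --iova-mode=pa.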
00:25:34.183 [2024-04-26 21:28:23.255674] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid92993 ] 00:25:34.183 [2024-04-26 21:28:23.390235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:34.441 [2024-04-26 21:28:23.501118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.441 [2024-04-26 21:28:23.501163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.441 [2024-04-26 21:28:23.501167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.441 I/O targets: 00:25:34.441 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:34.441 00:25:34.441 00:25:34.442 CUnit - A unit testing framework for C - Version 2.1-3 00:25:34.442 http://cunit.sourceforge.net/ 00:25:34.442 00:25:34.442 00:25:34.442 Suite: bdevio tests on: Nvme1n1 00:25:34.699 Test: blockdev write read block ...passed 00:25:34.699 Test: blockdev write zeroes read block ...passed 00:25:34.699 Test: blockdev write zeroes read no split ...passed 00:25:34.699 Test: blockdev write zeroes read split ...passed 00:25:34.699 Test: blockdev write zeroes read split partial ...passed 00:25:34.699 Test: blockdev reset ...[2024-04-26 21:28:23.776937] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.699 [2024-04-26 21:28:23.777044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120b220 (9): Bad file descriptor 00:25:34.699 [2024-04-26 21:28:23.792249] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:34.699 passed 00:25:34.699 Test: blockdev write read 8 blocks ...passed 00:25:34.699 Test: blockdev write read size > 128k ...passed 00:25:34.699 Test: blockdev write read invalid size ...passed 00:25:34.699 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:34.699 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:34.699 Test: blockdev write read max offset ...passed 00:25:34.699 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:34.699 Test: blockdev writev readv 8 blocks ...passed 00:25:34.699 Test: blockdev writev readv 30 x 1block ...passed 00:25:34.957 Test: blockdev writev readv block ...passed 00:25:34.957 Test: blockdev writev readv size > 128k ...passed 00:25:34.957 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:34.957 Test: blockdev comparev and writev ...[2024-04-26 21:28:23.964468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:34.957 [2024-04-26 21:28:23.964526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:34.957 [2024-04-26 21:28:23.964541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:34.957 [2024-04-26 21:28:23.964549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.957 [2024-04-26 21:28:23.965011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:34.957 [2024-04-26 21:28:23.965031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:34.957 [2024-04-26 21:28:23.965044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:34.957 [2024-04-26 21:28:23.965051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:34.957 [2024-04-26 21:28:23.965406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:34.957 [2024-04-26 21:28:23.965426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:34.957 [2024-04-26 21:28:23.965439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:34.957 [2024-04-26 21:28:23.965446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:34.957 [2024-04-26 21:28:23.965760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:34.958 [2024-04-26 21:28:23.965790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:34.958 [2024-04-26 21:28:23.965804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:34.958 [2024-04-26 21:28:23.965811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:34.958 passed 00:25:34.958 Test: blockdev nvme passthru rw ...passed 00:25:34.958 Test: blockdev nvme passthru vendor specific ...[2024-04-26 21:28:24.047782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:34.958 [2024-04-26 21:28:24.047821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:34.958 [2024-04-26 21:28:24.048151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:34.958 [2024-04-26 21:28:24.048168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:34.958 [2024-04-26 21:28:24.048266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:34.958 [2024-04-26 21:28:24.048275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:34.958 [2024-04-26 21:28:24.048378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:34.958 [2024-04-26 21:28:24.048388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:34.958 passed 00:25:34.958 Test: blockdev nvme admin passthru ...passed 00:25:34.958 Test: blockdev copy ...passed 00:25:34.958 00:25:34.958 Run Summary: Type Total Ran Passed Failed Inactive 00:25:34.958 suites 1 1 n/a 0 0 00:25:34.958 tests 23 23 23 0 0 00:25:34.958 asserts 152 152 152 0 
n/a 00:25:34.958 00:25:34.958 Elapsed time = 0.941 seconds 00:25:35.216 21:28:24 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:35.216 21:28:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.216 21:28:24 -- common/autotest_common.sh@10 -- # set +x 00:25:35.216 21:28:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.216 21:28:24 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:35.216 21:28:24 -- target/bdevio.sh@30 -- # nvmftestfini 00:25:35.216 21:28:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:35.216 21:28:24 -- nvmf/common.sh@117 -- # sync 00:25:35.216 21:28:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.216 21:28:24 -- nvmf/common.sh@120 -- # set +e 00:25:35.216 21:28:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:35.216 21:28:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:35.216 rmmod nvme_tcp 00:25:35.475 rmmod nvme_fabrics 00:25:35.475 rmmod nvme_keyring 00:25:35.475 21:28:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.475 21:28:24 -- nvmf/common.sh@124 -- # set -e 00:25:35.475 21:28:24 -- nvmf/common.sh@125 -- # return 0 00:25:35.475 21:28:24 -- nvmf/common.sh@478 -- # '[' -n 92937 ']' 00:25:35.475 21:28:24 -- nvmf/common.sh@479 -- # killprocess 92937 00:25:35.475 21:28:24 -- common/autotest_common.sh@936 -- # '[' -z 92937 ']' 00:25:35.475 21:28:24 -- common/autotest_common.sh@940 -- # kill -0 92937 00:25:35.475 21:28:24 -- common/autotest_common.sh@941 -- # uname 00:25:35.475 21:28:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:35.475 21:28:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92937 00:25:35.475 21:28:24 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:25:35.475 21:28:24 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:25:35.475 killing process with pid 92937 00:25:35.475 21:28:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92937' 00:25:35.475 21:28:24 -- common/autotest_common.sh@955 -- # kill 92937 00:25:35.475 21:28:24 -- common/autotest_common.sh@960 -- # wait 92937 00:25:35.733 21:28:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:35.733 21:28:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:35.733 21:28:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:35.733 21:28:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:35.733 21:28:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:35.733 21:28:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.733 21:28:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.733 21:28:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.733 21:28:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:35.733 00:25:35.733 real 0m3.389s 00:25:35.733 user 0m11.755s 00:25:35.733 sys 0m1.264s 00:25:35.733 21:28:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:35.733 21:28:24 -- common/autotest_common.sh@10 -- # set +x 00:25:35.733 ************************************ 00:25:35.733 END TEST nvmf_bdevio_no_huge 00:25:35.733 ************************************ 00:25:35.994 21:28:24 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:35.994 21:28:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:35.994 21:28:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:35.994 21:28:24 -- 
common/autotest_common.sh@10 -- # set +x 00:25:35.994 ************************************ 00:25:35.994 START TEST nvmf_tls 00:25:35.994 ************************************ 00:25:35.994 21:28:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:35.994 * Looking for test storage... 00:25:35.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:35.994 21:28:25 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:35.994 21:28:25 -- nvmf/common.sh@7 -- # uname -s 00:25:35.994 21:28:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.994 21:28:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.994 21:28:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.994 21:28:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.994 21:28:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.994 21:28:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.994 21:28:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.994 21:28:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.994 21:28:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.994 21:28:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.994 21:28:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:25:35.994 21:28:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:25:35.994 21:28:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.994 21:28:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.994 21:28:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:35.994 21:28:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.994 21:28:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:35.994 21:28:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.994 21:28:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.994 21:28:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.994 21:28:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.994 21:28:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.994 21:28:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.994 21:28:25 -- paths/export.sh@5 -- # export PATH 00:25:35.994 21:28:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.994 21:28:25 -- nvmf/common.sh@47 -- # : 0 00:25:35.994 21:28:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:35.994 21:28:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:35.994 21:28:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.994 21:28:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.994 21:28:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.994 21:28:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:35.994 21:28:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:35.994 21:28:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:35.994 21:28:25 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:35.994 21:28:25 -- target/tls.sh@62 -- # nvmftestinit 00:25:35.994 21:28:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:35.994 21:28:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.994 21:28:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:35.994 21:28:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:35.994 21:28:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:35.994 21:28:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.994 21:28:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.994 21:28:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.994 21:28:25 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:35.994 21:28:25 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:35.994 21:28:25 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:35.994 21:28:25 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:35.994 21:28:25 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:35.994 21:28:25 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:35.994 21:28:25 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.994 21:28:25 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.994 21:28:25 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:35.994 21:28:25 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:35.994 21:28:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:35.994 21:28:25 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:35.994 21:28:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:35.994 
21:28:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.994 21:28:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:35.994 21:28:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:35.994 21:28:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:35.994 21:28:25 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:35.994 21:28:25 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:35.994 21:28:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:36.255 Cannot find device "nvmf_tgt_br" 00:25:36.255 21:28:25 -- nvmf/common.sh@155 -- # true 00:25:36.255 21:28:25 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:36.255 Cannot find device "nvmf_tgt_br2" 00:25:36.255 21:28:25 -- nvmf/common.sh@156 -- # true 00:25:36.255 21:28:25 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:36.255 21:28:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:36.255 Cannot find device "nvmf_tgt_br" 00:25:36.255 21:28:25 -- nvmf/common.sh@158 -- # true 00:25:36.255 21:28:25 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:36.255 Cannot find device "nvmf_tgt_br2" 00:25:36.255 21:28:25 -- nvmf/common.sh@159 -- # true 00:25:36.255 21:28:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:36.255 21:28:25 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:36.255 21:28:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:36.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.255 21:28:25 -- nvmf/common.sh@162 -- # true 00:25:36.255 21:28:25 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:36.255 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:36.255 21:28:25 -- nvmf/common.sh@163 -- # true 00:25:36.255 21:28:25 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:36.255 21:28:25 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:36.255 21:28:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:36.255 21:28:25 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:36.255 21:28:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:36.255 21:28:25 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:36.255 21:28:25 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:36.256 21:28:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:36.256 21:28:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:36.256 21:28:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:36.256 21:28:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:36.256 21:28:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:36.256 21:28:25 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:36.256 21:28:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:36.256 21:28:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:36.256 21:28:25 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:36.256 21:28:25 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:36.256 21:28:25 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:36.256 21:28:25 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:36.516 21:28:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:36.516 21:28:25 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:36.516 21:28:25 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:36.516 21:28:25 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:36.516 21:28:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:36.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:25:36.516 00:25:36.516 --- 10.0.0.2 ping statistics --- 00:25:36.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.516 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:25:36.516 21:28:25 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:36.516 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:36.516 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:25:36.516 00:25:36.516 --- 10.0.0.3 ping statistics --- 00:25:36.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.516 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:36.516 21:28:25 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:36.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:25:36.516 00:25:36.516 --- 10.0.0.1 ping statistics --- 00:25:36.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.516 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:25:36.516 21:28:25 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.516 21:28:25 -- nvmf/common.sh@422 -- # return 0 00:25:36.516 21:28:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:36.516 21:28:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.516 21:28:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:36.516 21:28:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:36.516 21:28:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.516 21:28:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:36.516 21:28:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:36.516 21:28:25 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:25:36.516 21:28:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:36.516 21:28:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:36.516 21:28:25 -- common/autotest_common.sh@10 -- # set +x 00:25:36.516 21:28:25 -- nvmf/common.sh@470 -- # nvmfpid=93195 00:25:36.516 21:28:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:25:36.516 21:28:25 -- nvmf/common.sh@471 -- # waitforlisten 93195 00:25:36.516 21:28:25 -- common/autotest_common.sh@817 -- # '[' -z 93195 ']' 00:25:36.516 21:28:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.516 21:28:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:36.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
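In outline, the nvmf_veth_init calls traced above build a bridged veth topology between the host side (initiator, 10.0.0.1) and the nvmf_tgt_ns_spdk namespace (target, 10.0.0.2), then verify it with the pings shown. A condensed sketch using the same interface names, omitting the second target interface and the cleanup steps:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # host -> target namespace, as verified in the trace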
00:25:36.516 21:28:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.516 21:28:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:36.516 21:28:25 -- common/autotest_common.sh@10 -- # set +x 00:25:36.516 [2024-04-26 21:28:25.623274] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:36.516 [2024-04-26 21:28:25.623390] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.516 [2024-04-26 21:28:25.766353] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.775 [2024-04-26 21:28:25.818146] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.775 [2024-04-26 21:28:25.818193] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.775 [2024-04-26 21:28:25.818199] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.775 [2024-04-26 21:28:25.818204] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.775 [2024-04-26 21:28:25.818210] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.775 [2024-04-26 21:28:25.818235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.343 21:28:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:37.343 21:28:26 -- common/autotest_common.sh@850 -- # return 0 00:25:37.343 21:28:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:37.343 21:28:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:37.343 21:28:26 -- common/autotest_common.sh@10 -- # set +x 00:25:37.343 21:28:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.343 21:28:26 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:25:37.343 21:28:26 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:25:37.601 true 00:25:37.601 21:28:26 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:37.601 21:28:26 -- target/tls.sh@73 -- # jq -r .tls_version 00:25:37.890 21:28:27 -- target/tls.sh@73 -- # version=0 00:25:37.890 21:28:27 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:25:37.890 21:28:27 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:38.148 21:28:27 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:38.148 21:28:27 -- target/tls.sh@81 -- # jq -r .tls_version 00:25:38.406 21:28:27 -- target/tls.sh@81 -- # version=13 00:25:38.406 21:28:27 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:25:38.406 21:28:27 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:25:38.665 21:28:27 -- target/tls.sh@89 -- # jq -r .tls_version 00:25:38.665 21:28:27 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:38.923 21:28:27 -- target/tls.sh@89 -- # version=7 00:25:38.923 21:28:27 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:25:38.923 21:28:27 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:25:38.923 21:28:27 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:25:38.923 21:28:28 -- target/tls.sh@96 -- # ktls=false 00:25:38.923 21:28:28 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:25:38.923 21:28:28 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:25:39.181 21:28:28 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:39.181 21:28:28 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:25:39.441 21:28:28 -- target/tls.sh@104 -- # ktls=true 00:25:39.441 21:28:28 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:25:39.441 21:28:28 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:25:39.700 21:28:28 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:39.700 21:28:28 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:25:39.960 21:28:29 -- target/tls.sh@112 -- # ktls=false 00:25:39.960 21:28:29 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:25:39.960 21:28:29 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:25:39.960 21:28:29 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:25:39.960 21:28:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:39.960 21:28:29 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:25:39.960 21:28:29 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:25:39.960 21:28:29 -- nvmf/common.sh@693 -- # digest=1 00:25:39.960 21:28:29 -- nvmf/common.sh@694 -- # python - 00:25:39.960 21:28:29 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:39.960 21:28:29 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:25:39.960 21:28:29 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:25:39.960 21:28:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:39.960 21:28:29 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:25:39.960 21:28:29 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:25:39.960 21:28:29 -- nvmf/common.sh@693 -- # digest=1 00:25:39.960 21:28:29 -- nvmf/common.sh@694 -- # python - 00:25:39.960 21:28:29 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:39.960 21:28:29 -- target/tls.sh@121 -- # mktemp 00:25:39.960 21:28:29 -- target/tls.sh@121 -- # key_path=/tmp/tmp.wbEnI9rum9 00:25:39.960 21:28:29 -- target/tls.sh@122 -- # mktemp 00:25:39.960 21:28:29 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.BlL4fm0rlq 00:25:39.960 21:28:29 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:39.960 21:28:29 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:39.960 21:28:29 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.wbEnI9rum9 00:25:39.960 21:28:29 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.BlL4fm0rlq 00:25:39.960 21:28:29 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:40.219 21:28:29 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:25:40.479 21:28:29 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.wbEnI9rum9 00:25:40.479 21:28:29 -- target/tls.sh@49 -- # local 
key=/tmp/tmp.wbEnI9rum9 00:25:40.479 21:28:29 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:40.744 [2024-04-26 21:28:29.812484] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.744 21:28:29 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:41.009 21:28:30 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:41.010 [2024-04-26 21:28:30.235684] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:41.010 [2024-04-26 21:28:30.235856] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.010 21:28:30 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:41.273 malloc0 00:25:41.273 21:28:30 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:41.532 21:28:30 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wbEnI9rum9 00:25:41.790 [2024-04-26 21:28:30.875499] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:41.790 21:28:30 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.wbEnI9rum9 00:25:53.991 Initializing NVMe Controllers 00:25:53.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:53.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:53.991 Initialization complete. Launching workers. 
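The setup_nvmf_tgt steps traced above reduce to the following rpc.py sequence against the target running in the namespace: TCP transport, subsystem, a listener with TLS enabled via -k, a malloc namespace, and the host authorized with the interchange PSK file (which the test keeps at mode 0600). Condensed from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.wbEnI9rum9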
00:25:53.991 ======================================================== 00:25:53.991 Latency(us) 00:25:53.991 Device Information : IOPS MiB/s Average min max 00:25:53.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12793.47 49.97 5003.33 1042.24 10685.22 00:25:53.991 ======================================================== 00:25:53.991 Total : 12793.47 49.97 5003.33 1042.24 10685.22 00:25:53.991 00:25:53.991 21:28:41 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wbEnI9rum9 00:25:53.991 21:28:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:53.991 21:28:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:53.991 21:28:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:53.991 21:28:41 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wbEnI9rum9' 00:25:53.991 21:28:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:53.991 21:28:41 -- target/tls.sh@28 -- # bdevperf_pid=93545 00:25:53.991 21:28:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:53.991 21:28:41 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:53.991 21:28:41 -- target/tls.sh@31 -- # waitforlisten 93545 /var/tmp/bdevperf.sock 00:25:53.991 21:28:41 -- common/autotest_common.sh@817 -- # '[' -z 93545 ']' 00:25:53.991 21:28:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:53.991 21:28:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:53.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:53.991 21:28:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:53.991 21:28:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:53.991 21:28:41 -- common/autotest_common.sh@10 -- # set +x 00:25:53.991 [2024-04-26 21:28:41.136580] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
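The randrw run whose IOPS/latency summary appears above was driven by spdk_nvme_perf from inside the target namespace, with the SSL socket implementation selected and the same PSK file passed via --psk-path. Condensed from the command in the trace:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl \
    -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path /tmp/tmp.wbEnI9rum9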
00:25:53.991 [2024-04-26 21:28:41.136653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93545 ] 00:25:53.991 [2024-04-26 21:28:41.276027] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.991 [2024-04-26 21:28:41.330261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.991 21:28:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:53.991 21:28:42 -- common/autotest_common.sh@850 -- # return 0 00:25:53.991 21:28:42 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wbEnI9rum9 00:25:53.991 [2024-04-26 21:28:42.317810] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:53.991 [2024-04-26 21:28:42.317917] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:53.991 TLSTESTn1 00:25:53.991 21:28:42 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:53.991 Running I/O for 10 seconds... 00:26:03.982 00:26:03.982 Latency(us) 00:26:03.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.982 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:03.982 Verification LBA range: start 0x0 length 0x2000 00:26:03.982 TLSTESTn1 : 10.01 4922.33 19.23 0.00 0.00 25957.41 5094.06 20376.26 00:26:03.982 =================================================================================================================== 00:26:03.982 Total : 4922.33 19.23 0.00 0.00 25957.41 5094.06 20376.26 00:26:03.982 0 00:26:03.982 21:28:52 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:03.982 21:28:52 -- target/tls.sh@45 -- # killprocess 93545 00:26:03.982 21:28:52 -- common/autotest_common.sh@936 -- # '[' -z 93545 ']' 00:26:03.982 21:28:52 -- common/autotest_common.sh@940 -- # kill -0 93545 00:26:03.982 21:28:52 -- common/autotest_common.sh@941 -- # uname 00:26:03.982 21:28:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:03.982 21:28:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93545 00:26:03.982 killing process with pid 93545 00:26:03.982 Received shutdown signal, test time was about 10.000000 seconds 00:26:03.982 00:26:03.982 Latency(us) 00:26:03.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.982 =================================================================================================================== 00:26:03.982 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:03.982 21:28:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:03.982 21:28:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:03.982 21:28:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93545' 00:26:03.982 21:28:52 -- common/autotest_common.sh@955 -- # kill 93545 00:26:03.982 [2024-04-26 21:28:52.593211] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:03.982 
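run_bdevperf, used for this successful case and for every negative case that follows, repeats the same pattern: start bdevperf with no config (-z) on a private RPC socket, attach an NVMe-oF controller over TLS with the PSK file, then drive I/O through bdevperf.py. A condensed sketch of the passing run above (paths and arguments as in the trace):

spdk=/home/vagrant/spdk_repo/spdk
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &
# (the real test waits for /var/tmp/bdevperf.sock before issuing RPCs)
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.wbEnI9rum9          # creates bdev TLSTESTn1 on success
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests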
21:28:52 -- common/autotest_common.sh@960 -- # wait 93545 00:26:03.982 21:28:52 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BlL4fm0rlq 00:26:03.983 21:28:52 -- common/autotest_common.sh@638 -- # local es=0 00:26:03.983 21:28:52 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BlL4fm0rlq 00:26:03.983 21:28:52 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:26:03.983 21:28:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:03.983 21:28:52 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:26:03.983 21:28:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:03.983 21:28:52 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BlL4fm0rlq 00:26:03.983 21:28:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:03.983 21:28:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:03.983 21:28:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:03.983 21:28:52 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BlL4fm0rlq' 00:26:03.983 21:28:52 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:03.983 21:28:52 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:03.983 21:28:52 -- target/tls.sh@28 -- # bdevperf_pid=93691 00:26:03.983 21:28:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:03.983 21:28:52 -- target/tls.sh@31 -- # waitforlisten 93691 /var/tmp/bdevperf.sock 00:26:03.983 21:28:52 -- common/autotest_common.sh@817 -- # '[' -z 93691 ']' 00:26:03.983 21:28:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:03.983 21:28:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:03.983 21:28:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:03.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:03.983 21:28:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:03.983 21:28:52 -- common/autotest_common.sh@10 -- # set +x 00:26:03.983 [2024-04-26 21:28:52.827926] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:26:03.983 [2024-04-26 21:28:52.828093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93691 ] 00:26:03.983 [2024-04-26 21:28:52.970529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.983 [2024-04-26 21:28:53.024191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.550 21:28:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:04.550 21:28:53 -- common/autotest_common.sh@850 -- # return 0 00:26:04.550 21:28:53 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BlL4fm0rlq 00:26:04.810 [2024-04-26 21:28:53.995582] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:04.810 [2024-04-26 21:28:53.995776] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:04.810 [2024-04-26 21:28:54.000789] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:04.810 [2024-04-26 21:28:54.001397] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224df60 (107): Transport endpoint is not connected 00:26:04.810 [2024-04-26 21:28:54.002378] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224df60 (9): Bad file descriptor 00:26:04.810 [2024-04-26 21:28:54.003385] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.810 [2024-04-26 21:28:54.003447] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:26:04.810 [2024-04-26 21:28:54.003481] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:04.810 2024/04/26 21:28:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.BlL4fm0rlq subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:04.810 request: 00:26:04.810 { 00:26:04.810 "method": "bdev_nvme_attach_controller", 00:26:04.810 "params": { 00:26:04.810 "name": "TLSTEST", 00:26:04.810 "trtype": "tcp", 00:26:04.810 "traddr": "10.0.0.2", 00:26:04.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:04.810 "adrfam": "ipv4", 00:26:04.810 "trsvcid": "4420", 00:26:04.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:04.810 "psk": "/tmp/tmp.BlL4fm0rlq" 00:26:04.810 } 00:26:04.810 } 00:26:04.810 Got JSON-RPC error response 00:26:04.810 GoRPCClient: error on JSON-RPC call 00:26:04.810 21:28:54 -- target/tls.sh@36 -- # killprocess 93691 00:26:04.810 21:28:54 -- common/autotest_common.sh@936 -- # '[' -z 93691 ']' 00:26:04.810 21:28:54 -- common/autotest_common.sh@940 -- # kill -0 93691 00:26:04.810 21:28:54 -- common/autotest_common.sh@941 -- # uname 00:26:04.810 21:28:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:04.810 21:28:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93691 00:26:04.810 21:28:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:04.810 21:28:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:04.810 21:28:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93691' 00:26:04.810 killing process with pid 93691 00:26:04.810 21:28:54 -- common/autotest_common.sh@955 -- # kill 93691 00:26:04.810 Received shutdown signal, test time was about 10.000000 seconds 00:26:04.810 00:26:04.810 Latency(us) 00:26:04.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.810 =================================================================================================================== 00:26:04.810 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:04.810 [2024-04-26 21:28:54.060707] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:04.810 21:28:54 -- common/autotest_common.sh@960 -- # wait 93691 00:26:05.070 21:28:54 -- target/tls.sh@37 -- # return 1 00:26:05.070 21:28:54 -- common/autotest_common.sh@641 -- # es=1 00:26:05.070 21:28:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:05.070 21:28:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:05.070 21:28:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:05.070 21:28:54 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wbEnI9rum9 00:26:05.070 21:28:54 -- common/autotest_common.sh@638 -- # local es=0 00:26:05.070 21:28:54 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wbEnI9rum9 00:26:05.070 21:28:54 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:26:05.070 21:28:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:05.070 21:28:54 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:26:05.070 21:28:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:05.070 21:28:54 -- common/autotest_common.sh@641 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.wbEnI9rum9 00:26:05.070 21:28:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:05.070 21:28:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:05.070 21:28:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:26:05.070 21:28:54 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wbEnI9rum9' 00:26:05.070 21:28:54 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:05.070 21:28:54 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:05.070 21:28:54 -- target/tls.sh@28 -- # bdevperf_pid=93738 00:26:05.070 21:28:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:05.070 21:28:54 -- target/tls.sh@31 -- # waitforlisten 93738 /var/tmp/bdevperf.sock 00:26:05.070 21:28:54 -- common/autotest_common.sh@817 -- # '[' -z 93738 ']' 00:26:05.070 21:28:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:05.070 21:28:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:05.070 21:28:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:05.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:05.070 21:28:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:05.070 21:28:54 -- common/autotest_common.sh@10 -- # set +x 00:26:05.070 [2024-04-26 21:28:54.296266] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:05.070 [2024-04-26 21:28:54.296887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93738 ] 00:26:05.330 [2024-04-26 21:28:54.442633] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.330 [2024-04-26 21:28:54.496354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:05.589 21:28:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:05.589 21:28:54 -- common/autotest_common.sh@850 -- # return 0 00:26:05.589 21:28:54 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.wbEnI9rum9 00:26:05.589 [2024-04-26 21:28:54.826231] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:05.589 [2024-04-26 21:28:54.826327] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:05.589 [2024-04-26 21:28:54.831015] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:26:05.589 [2024-04-26 21:28:54.831046] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:26:05.589 [2024-04-26 21:28:54.831092] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:05.589 [2024-04-26 21:28:54.831736] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4af60 (107): Transport endpoint is not connected 00:26:05.589 [2024-04-26 21:28:54.832721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4af60 (9): Bad file descriptor 00:26:05.589 [2024-04-26 21:28:54.833717] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.589 [2024-04-26 21:28:54.833732] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:26:05.589 [2024-04-26 21:28:54.833751] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.589 2024/04/26 21:28:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.wbEnI9rum9 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:05.589 request: 00:26:05.589 { 00:26:05.589 "method": "bdev_nvme_attach_controller", 00:26:05.589 "params": { 00:26:05.589 "name": "TLSTEST", 00:26:05.589 "trtype": "tcp", 00:26:05.589 "traddr": "10.0.0.2", 00:26:05.589 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:05.589 "adrfam": "ipv4", 00:26:05.589 "trsvcid": "4420", 00:26:05.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.589 "psk": "/tmp/tmp.wbEnI9rum9" 00:26:05.589 } 00:26:05.589 } 00:26:05.589 Got JSON-RPC error response 00:26:05.589 GoRPCClient: error on JSON-RPC call 00:26:05.849 21:28:54 -- target/tls.sh@36 -- # killprocess 93738 00:26:05.849 21:28:54 -- common/autotest_common.sh@936 -- # '[' -z 93738 ']' 00:26:05.849 21:28:54 -- common/autotest_common.sh@940 -- # kill -0 93738 00:26:05.849 21:28:54 -- common/autotest_common.sh@941 -- # uname 00:26:05.849 21:28:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:05.849 21:28:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93738 00:26:05.849 21:28:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:05.849 21:28:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:05.849 killing process with pid 93738 00:26:05.849 21:28:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93738' 00:26:05.849 Received shutdown signal, test time was about 10.000000 seconds 00:26:05.849 00:26:05.849 Latency(us) 00:26:05.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.849 =================================================================================================================== 00:26:05.849 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:05.849 21:28:54 -- common/autotest_common.sh@955 -- # kill 93738 00:26:05.849 [2024-04-26 21:28:54.889327] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:05.849 21:28:54 -- common/autotest_common.sh@960 -- # wait 93738 00:26:05.849 21:28:55 -- target/tls.sh@37 -- # return 1 00:26:05.849 21:28:55 -- common/autotest_common.sh@641 -- # es=1 00:26:05.849 21:28:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:05.849 21:28:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:05.849 21:28:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:05.849 21:28:55 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /tmp/tmp.wbEnI9rum9 00:26:05.849 21:28:55 -- common/autotest_common.sh@638 -- # local es=0 00:26:05.849 21:28:55 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wbEnI9rum9 00:26:05.849 21:28:55 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:26:05.849 21:28:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:05.849 21:28:55 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:26:05.849 21:28:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:05.849 21:28:55 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.wbEnI9rum9 00:26:05.849 21:28:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:05.849 21:28:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:26:05.849 21:28:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:05.849 21:28:55 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wbEnI9rum9' 00:26:05.849 21:28:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:05.849 21:28:55 -- target/tls.sh@28 -- # bdevperf_pid=93770 00:26:05.849 21:28:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:05.849 21:28:55 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:05.850 21:28:55 -- target/tls.sh@31 -- # waitforlisten 93770 /var/tmp/bdevperf.sock 00:26:05.850 21:28:55 -- common/autotest_common.sh@817 -- # '[' -z 93770 ']' 00:26:05.850 21:28:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:05.850 21:28:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:05.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:05.850 21:28:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:05.850 21:28:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:05.850 21:28:55 -- common/autotest_common.sh@10 -- # set +x 00:26:06.110 [2024-04-26 21:28:55.123587] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:26:06.110 [2024-04-26 21:28:55.123643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93770 ] 00:26:06.110 [2024-04-26 21:28:55.263416] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.110 [2024-04-26 21:28:55.311503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:07.048 21:28:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:07.048 21:28:55 -- common/autotest_common.sh@850 -- # return 0 00:26:07.048 21:28:56 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wbEnI9rum9 00:26:07.048 [2024-04-26 21:28:56.236590] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:07.048 [2024-04-26 21:28:56.236687] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:07.048 [2024-04-26 21:28:56.246343] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:26:07.048 [2024-04-26 21:28:56.246383] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:26:07.048 [2024-04-26 21:28:56.246434] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:07.048 [2024-04-26 21:28:56.247096] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baef60 (107): Transport endpoint is not connected 00:26:07.048 [2024-04-26 21:28:56.248081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baef60 (9): Bad file descriptor 00:26:07.048 [2024-04-26 21:28:56.249076] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:07.048 [2024-04-26 21:28:56.249093] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:26:07.048 [2024-04-26 21:28:56.249102] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:26:07.048 2024/04/26 21:28:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.wbEnI9rum9 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:07.048 request: 00:26:07.048 { 00:26:07.048 "method": "bdev_nvme_attach_controller", 00:26:07.048 "params": { 00:26:07.048 "name": "TLSTEST", 00:26:07.048 "trtype": "tcp", 00:26:07.048 "traddr": "10.0.0.2", 00:26:07.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:07.048 "adrfam": "ipv4", 00:26:07.048 "trsvcid": "4420", 00:26:07.048 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:07.048 "psk": "/tmp/tmp.wbEnI9rum9" 00:26:07.048 } 00:26:07.048 } 00:26:07.048 Got JSON-RPC error response 00:26:07.048 GoRPCClient: error on JSON-RPC call 00:26:07.048 21:28:56 -- target/tls.sh@36 -- # killprocess 93770 00:26:07.048 21:28:56 -- common/autotest_common.sh@936 -- # '[' -z 93770 ']' 00:26:07.048 21:28:56 -- common/autotest_common.sh@940 -- # kill -0 93770 00:26:07.048 21:28:56 -- common/autotest_common.sh@941 -- # uname 00:26:07.048 21:28:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:07.048 21:28:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93770 00:26:07.306 21:28:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:07.306 21:28:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:07.306 killing process with pid 93770 00:26:07.306 21:28:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93770' 00:26:07.306 21:28:56 -- common/autotest_common.sh@955 -- # kill 93770 00:26:07.306 Received shutdown signal, test time was about 10.000000 seconds 00:26:07.306 00:26:07.306 Latency(us) 00:26:07.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.306 =================================================================================================================== 00:26:07.306 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:07.306 [2024-04-26 21:28:56.317209] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:07.306 21:28:56 -- common/autotest_common.sh@960 -- # wait 93770 00:26:07.306 21:28:56 -- target/tls.sh@37 -- # return 1 00:26:07.306 21:28:56 -- common/autotest_common.sh@641 -- # es=1 00:26:07.306 21:28:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:07.306 21:28:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:07.306 21:28:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:07.306 21:28:56 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:26:07.306 21:28:56 -- common/autotest_common.sh@638 -- # local es=0 00:26:07.306 21:28:56 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:26:07.306 21:28:56 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:26:07.306 21:28:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:07.306 21:28:56 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:26:07.306 21:28:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:07.306 21:28:56 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 
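Each of these negative cases (wrong key, wrong hostnqn, wrong subsystem, and next the missing PSK) is wrapped in the NOT helper, which runs the command and inverts its exit status; the real helper in autotest_common.sh additionally treats exit codes above 128, i.e. signals, as hard failures. A minimal sketch of the idea:

NOT() {
    if "$@"; then
        return 1      # command unexpectedly succeeded -> test failure
    fi
    return 0          # command failed, which is what these cases expect
}
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''   # no PSK: must fail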
00:26:07.306 21:28:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:07.306 21:28:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:07.306 21:28:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:07.306 21:28:56 -- target/tls.sh@23 -- # psk= 00:26:07.306 21:28:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:07.306 21:28:56 -- target/tls.sh@28 -- # bdevperf_pid=93810 00:26:07.306 21:28:56 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:07.306 21:28:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:07.306 21:28:56 -- target/tls.sh@31 -- # waitforlisten 93810 /var/tmp/bdevperf.sock 00:26:07.306 21:28:56 -- common/autotest_common.sh@817 -- # '[' -z 93810 ']' 00:26:07.306 21:28:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:07.306 21:28:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:07.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:07.306 21:28:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:07.306 21:28:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:07.306 21:28:56 -- common/autotest_common.sh@10 -- # set +x 00:26:07.307 [2024-04-26 21:28:56.553015] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:07.307 [2024-04-26 21:28:56.553115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93810 ] 00:26:07.565 [2024-04-26 21:28:56.702075] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.565 [2024-04-26 21:28:56.755133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.503 21:28:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:08.503 21:28:57 -- common/autotest_common.sh@850 -- # return 0 00:26:08.503 21:28:57 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:08.504 [2024-04-26 21:28:57.643768] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:08.504 [2024-04-26 21:28:57.645633] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x927c30 (9): Bad file descriptor 00:26:08.504 [2024-04-26 21:28:57.646626] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.504 [2024-04-26 21:28:57.646645] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:26:08.504 [2024-04-26 21:28:57.646655] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:08.504 2024/04/26 21:28:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:08.504 request: 00:26:08.504 { 00:26:08.504 "method": "bdev_nvme_attach_controller", 00:26:08.504 "params": { 00:26:08.504 "name": "TLSTEST", 00:26:08.504 "trtype": "tcp", 00:26:08.504 "traddr": "10.0.0.2", 00:26:08.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:08.504 "adrfam": "ipv4", 00:26:08.504 "trsvcid": "4420", 00:26:08.504 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:26:08.504 } 00:26:08.504 } 00:26:08.504 Got JSON-RPC error response 00:26:08.504 GoRPCClient: error on JSON-RPC call 00:26:08.504 21:28:57 -- target/tls.sh@36 -- # killprocess 93810 00:26:08.504 21:28:57 -- common/autotest_common.sh@936 -- # '[' -z 93810 ']' 00:26:08.504 21:28:57 -- common/autotest_common.sh@940 -- # kill -0 93810 00:26:08.504 21:28:57 -- common/autotest_common.sh@941 -- # uname 00:26:08.504 21:28:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:08.504 21:28:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93810 00:26:08.504 21:28:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:08.504 21:28:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:08.504 killing process with pid 93810 00:26:08.504 21:28:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93810' 00:26:08.504 21:28:57 -- common/autotest_common.sh@955 -- # kill 93810 00:26:08.504 Received shutdown signal, test time was about 10.000000 seconds 00:26:08.504 00:26:08.504 Latency(us) 00:26:08.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.504 =================================================================================================================== 00:26:08.504 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:08.504 21:28:57 -- common/autotest_common.sh@960 -- # wait 93810 00:26:08.764 21:28:57 -- target/tls.sh@37 -- # return 1 00:26:08.764 21:28:57 -- common/autotest_common.sh@641 -- # es=1 00:26:08.764 21:28:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:08.764 21:28:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:08.764 21:28:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:08.764 21:28:57 -- target/tls.sh@158 -- # killprocess 93195 00:26:08.764 21:28:57 -- common/autotest_common.sh@936 -- # '[' -z 93195 ']' 00:26:08.764 21:28:57 -- common/autotest_common.sh@940 -- # kill -0 93195 00:26:08.764 21:28:57 -- common/autotest_common.sh@941 -- # uname 00:26:08.764 21:28:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:08.764 21:28:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93195 00:26:08.764 21:28:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:08.764 21:28:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:08.764 killing process with pid 93195 00:26:08.764 21:28:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93195' 00:26:08.764 21:28:57 -- common/autotest_common.sh@955 -- # kill 93195 00:26:08.764 [2024-04-26 21:28:57.903850] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:08.764 21:28:57 -- 
common/autotest_common.sh@960 -- # wait 93195 00:26:09.026 21:28:58 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:26:09.026 21:28:58 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:26:09.027 21:28:58 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:09.027 21:28:58 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:26:09.027 21:28:58 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:26:09.027 21:28:58 -- nvmf/common.sh@693 -- # digest=2 00:26:09.027 21:28:58 -- nvmf/common.sh@694 -- # python - 00:26:09.027 21:28:58 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:26:09.027 21:28:58 -- target/tls.sh@160 -- # mktemp 00:26:09.027 21:28:58 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.TfUETzv55G 00:26:09.027 21:28:58 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:26:09.027 21:28:58 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.TfUETzv55G 00:26:09.027 21:28:58 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:26:09.027 21:28:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:09.027 21:28:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:09.027 21:28:58 -- common/autotest_common.sh@10 -- # set +x 00:26:09.027 21:28:58 -- nvmf/common.sh@470 -- # nvmfpid=93871 00:26:09.027 21:28:58 -- nvmf/common.sh@471 -- # waitforlisten 93871 00:26:09.027 21:28:58 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:09.027 21:28:58 -- common/autotest_common.sh@817 -- # '[' -z 93871 ']' 00:26:09.027 21:28:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.027 21:28:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:09.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.027 21:28:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.027 21:28:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:09.027 21:28:58 -- common/autotest_common.sh@10 -- # set +x 00:26:09.027 [2024-04-26 21:28:58.233060] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:09.027 [2024-04-26 21:28:58.233123] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.286 [2024-04-26 21:28:58.363092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.286 [2024-04-26 21:28:58.413726] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.286 [2024-04-26 21:28:58.413801] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.286 [2024-04-26 21:28:58.413808] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.286 [2024-04-26 21:28:58.413814] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.286 [2024-04-26 21:28:58.413819] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
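For reference, the NVMeTLSkey-1 strings produced by format_interchange_psk here (digest 2, the longer key) and earlier (digest 1) look like base64 of the configured key bytes followed by a CRC32, with the middle field selecting the digest. A minimal sketch that reproduces strings of the same shape; the little-endian CRC byte order and the meaning of the digest field are assumptions, not taken from the trace:

format_interchange_psk() {    # sketch only; the real helper lives in nvmf/common.sh
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed byte order
print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}
format_interchange_psk 00112233445566778899aabbccddeeff 1   # same shape as the key used above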
00:26:09.286 [2024-04-26 21:28:58.413849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.225 21:28:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:10.225 21:28:59 -- common/autotest_common.sh@850 -- # return 0 00:26:10.225 21:28:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:10.225 21:28:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:10.225 21:28:59 -- common/autotest_common.sh@10 -- # set +x 00:26:10.225 21:28:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.225 21:28:59 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.TfUETzv55G 00:26:10.225 21:28:59 -- target/tls.sh@49 -- # local key=/tmp/tmp.TfUETzv55G 00:26:10.225 21:28:59 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:10.225 [2024-04-26 21:28:59.367190] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.225 21:28:59 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:10.490 21:28:59 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:10.748 [2024-04-26 21:28:59.850402] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:10.749 [2024-04-26 21:28:59.850622] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.749 21:28:59 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:11.006 malloc0 00:26:11.006 21:29:00 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:11.264 21:29:00 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TfUETzv55G 00:26:11.522 [2024-04-26 21:29:00.562297] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:11.522 21:29:00 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TfUETzv55G 00:26:11.522 21:29:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:11.522 21:29:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:11.522 21:29:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:11.522 21:29:00 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TfUETzv55G' 00:26:11.522 21:29:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:11.522 21:29:00 -- target/tls.sh@28 -- # bdevperf_pid=93968 00:26:11.522 21:29:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:11.522 21:29:00 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:11.522 21:29:00 -- target/tls.sh@31 -- # waitforlisten 93968 /var/tmp/bdevperf.sock 00:26:11.522 21:29:00 -- common/autotest_common.sh@817 -- # '[' -z 93968 ']' 00:26:11.522 21:29:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:11.522 21:29:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:11.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
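waitforlisten, which shows up before both the target RPCs and the bdevperf RPCs in this trace, simply polls until something answers on the given UNIX-domain RPC socket while the process is still alive. A rough equivalent, assuming rpc.py's -t timeout and rpc_get_methods as the probe; the real helper's retry logic differs in detail:

waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do               # max_retries=100, as in the trace
        kill -0 "$pid" || return 1                # process died before listening
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 \
            rpc_get_methods &> /dev/null && return 0
        sleep 0.1
    done
    return 1
}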
00:26:11.522 21:29:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:11.522 21:29:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:11.522 21:29:00 -- common/autotest_common.sh@10 -- # set +x 00:26:11.522 [2024-04-26 21:29:00.636572] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:11.522 [2024-04-26 21:29:00.636676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93968 ] 00:26:11.522 [2024-04-26 21:29:00.769861] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.779 [2024-04-26 21:29:00.837215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:12.345 21:29:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:12.345 21:29:01 -- common/autotest_common.sh@850 -- # return 0 00:26:12.345 21:29:01 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TfUETzv55G 00:26:12.602 [2024-04-26 21:29:01.752228] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:12.602 [2024-04-26 21:29:01.752327] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:12.602 TLSTESTn1 00:26:12.859 21:29:01 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:12.859 Running I/O for 10 seconds... 
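The initiator half is symmetric: bdevperf is launched idle (-z) on its own RPC socket, the TLS controller is attached with the same PSK file, and the workload is kicked off over that socket. Condensed from the trace, with the backgrounding and socket wait that the harness performs implied:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &   # -z: stay idle until driven over RPC
    # (the harness waits for $sock to answer before the next call)

    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TfUETzv55G

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests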
00:26:22.840 00:26:22.840 Latency(us) 00:26:22.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.840 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:22.840 Verification LBA range: start 0x0 length 0x2000 00:26:22.840 TLSTESTn1 : 10.02 5500.28 21.49 0.00 0.00 23224.05 4607.55 16140.74 00:26:22.840 =================================================================================================================== 00:26:22.840 Total : 5500.28 21.49 0.00 0.00 23224.05 4607.55 16140.74 00:26:22.840 0 00:26:22.840 21:29:11 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:22.840 21:29:11 -- target/tls.sh@45 -- # killprocess 93968 00:26:22.840 21:29:11 -- common/autotest_common.sh@936 -- # '[' -z 93968 ']' 00:26:22.840 21:29:11 -- common/autotest_common.sh@940 -- # kill -0 93968 00:26:22.841 21:29:11 -- common/autotest_common.sh@941 -- # uname 00:26:22.841 21:29:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:22.841 21:29:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93968 00:26:22.841 killing process with pid 93968 00:26:22.841 Received shutdown signal, test time was about 10.000000 seconds 00:26:22.841 00:26:22.841 Latency(us) 00:26:22.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.841 =================================================================================================================== 00:26:22.841 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:22.841 21:29:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:22.841 21:29:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:22.841 21:29:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93968' 00:26:22.841 21:29:11 -- common/autotest_common.sh@955 -- # kill 93968 00:26:22.841 [2024-04-26 21:29:11.996267] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:22.841 21:29:11 -- common/autotest_common.sh@960 -- # wait 93968 00:26:23.142 21:29:12 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.TfUETzv55G 00:26:23.142 21:29:12 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TfUETzv55G 00:26:23.142 21:29:12 -- common/autotest_common.sh@638 -- # local es=0 00:26:23.142 21:29:12 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TfUETzv55G 00:26:23.142 21:29:12 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:26:23.142 21:29:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:23.142 21:29:12 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:26:23.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
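tls.sh@170-171 is a negative check: the key file is made world-readable on purpose and the subsequent attach is expected to fail, which the NOT wrapper turns into a pass. A reduced sketch of that expectation (paths and flags as in this run; the quoted error text is the one reported further down):

    chmod 0666 /tmp/tmp.TfUETzv55G   # deliberately too permissive

    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TfUETzv55G; then
        echo "unexpected success: world-readable PSK was accepted" >&2
        exit 1
    fi
    # expected failure: "Incorrect permissions for PSK file" -> "Could not load PSK from /tmp/tmp.TfUETzv55G"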
00:26:23.142 21:29:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:23.142 21:29:12 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TfUETzv55G 00:26:23.142 21:29:12 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:23.142 21:29:12 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:23.142 21:29:12 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:23.142 21:29:12 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TfUETzv55G' 00:26:23.142 21:29:12 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:23.142 21:29:12 -- target/tls.sh@28 -- # bdevperf_pid=94115 00:26:23.142 21:29:12 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:23.142 21:29:12 -- target/tls.sh@31 -- # waitforlisten 94115 /var/tmp/bdevperf.sock 00:26:23.142 21:29:12 -- common/autotest_common.sh@817 -- # '[' -z 94115 ']' 00:26:23.142 21:29:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:23.142 21:29:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:23.142 21:29:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:23.142 21:29:12 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:23.142 21:29:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:23.142 21:29:12 -- common/autotest_common.sh@10 -- # set +x 00:26:23.142 [2024-04-26 21:29:12.244314] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:23.142 [2024-04-26 21:29:12.244495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94115 ] 00:26:23.142 [2024-04-26 21:29:12.372629] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.402 [2024-04-26 21:29:12.424267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.971 21:29:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:23.971 21:29:13 -- common/autotest_common.sh@850 -- # return 0 00:26:23.971 21:29:13 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TfUETzv55G 00:26:24.231 [2024-04-26 21:29:13.354389] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:24.231 [2024-04-26 21:29:13.354451] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:26:24.231 [2024-04-26 21:29:13.354459] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.TfUETzv55G 00:26:24.231 2024/04/26 21:29:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.TfUETzv55G subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:26:24.231 request: 00:26:24.231 { 00:26:24.231 "method": "bdev_nvme_attach_controller", 00:26:24.231 "params": { 00:26:24.231 "name": "TLSTEST", 
00:26:24.231 "trtype": "tcp", 00:26:24.231 "traddr": "10.0.0.2", 00:26:24.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:24.231 "adrfam": "ipv4", 00:26:24.231 "trsvcid": "4420", 00:26:24.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:24.231 "psk": "/tmp/tmp.TfUETzv55G" 00:26:24.231 } 00:26:24.231 } 00:26:24.231 Got JSON-RPC error response 00:26:24.231 GoRPCClient: error on JSON-RPC call 00:26:24.231 21:29:13 -- target/tls.sh@36 -- # killprocess 94115 00:26:24.231 21:29:13 -- common/autotest_common.sh@936 -- # '[' -z 94115 ']' 00:26:24.231 21:29:13 -- common/autotest_common.sh@940 -- # kill -0 94115 00:26:24.231 21:29:13 -- common/autotest_common.sh@941 -- # uname 00:26:24.231 21:29:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:24.231 21:29:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94115 00:26:24.231 killing process with pid 94115 00:26:24.231 Received shutdown signal, test time was about 10.000000 seconds 00:26:24.231 00:26:24.231 Latency(us) 00:26:24.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.231 =================================================================================================================== 00:26:24.231 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:24.231 21:29:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:24.231 21:29:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:24.231 21:29:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94115' 00:26:24.231 21:29:13 -- common/autotest_common.sh@955 -- # kill 94115 00:26:24.231 21:29:13 -- common/autotest_common.sh@960 -- # wait 94115 00:26:24.505 21:29:13 -- target/tls.sh@37 -- # return 1 00:26:24.505 21:29:13 -- common/autotest_common.sh@641 -- # es=1 00:26:24.505 21:29:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:24.505 21:29:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:24.505 21:29:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:24.505 21:29:13 -- target/tls.sh@174 -- # killprocess 93871 00:26:24.505 21:29:13 -- common/autotest_common.sh@936 -- # '[' -z 93871 ']' 00:26:24.506 21:29:13 -- common/autotest_common.sh@940 -- # kill -0 93871 00:26:24.506 21:29:13 -- common/autotest_common.sh@941 -- # uname 00:26:24.506 21:29:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:24.506 21:29:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93871 00:26:24.506 21:29:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:24.506 21:29:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:24.506 21:29:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93871' 00:26:24.506 killing process with pid 93871 00:26:24.506 21:29:13 -- common/autotest_common.sh@955 -- # kill 93871 00:26:24.506 [2024-04-26 21:29:13.616886] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:24.506 21:29:13 -- common/autotest_common.sh@960 -- # wait 93871 00:26:24.765 21:29:13 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:26:24.765 21:29:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:24.765 21:29:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:24.765 21:29:13 -- common/autotest_common.sh@10 -- # set +x 00:26:24.765 21:29:13 -- nvmf/common.sh@470 -- # nvmfpid=94167 00:26:24.765 21:29:13 -- nvmf/common.sh@469 -- # ip netns exec 
nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:24.765 21:29:13 -- nvmf/common.sh@471 -- # waitforlisten 94167 00:26:24.765 21:29:13 -- common/autotest_common.sh@817 -- # '[' -z 94167 ']' 00:26:24.765 21:29:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.765 21:29:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:24.765 21:29:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.765 21:29:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:24.765 21:29:13 -- common/autotest_common.sh@10 -- # set +x 00:26:24.765 [2024-04-26 21:29:13.876682] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:24.765 [2024-04-26 21:29:13.876759] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.765 [2024-04-26 21:29:14.002832] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.024 [2024-04-26 21:29:14.064565] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.024 [2024-04-26 21:29:14.064612] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.024 [2024-04-26 21:29:14.064636] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.024 [2024-04-26 21:29:14.064642] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.024 [2024-04-26 21:29:14.064647] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
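nvmfappstart and waitforlisten, used again here for pid 94167, follow a start-then-poll pattern: launch nvmf_tgt inside the test netns, then block until its RPC socket answers. A simplified stand-in is sketched below; the real helpers in nvmf/common.sh and autotest_common.sh also handle retries, timeouts and cleanup.

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # poll the RPC socket until the app answers, then it is safe to configure it
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.2
    done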
00:26:25.024 [2024-04-26 21:29:14.064684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.590 21:29:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:25.590 21:29:14 -- common/autotest_common.sh@850 -- # return 0 00:26:25.590 21:29:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:25.590 21:29:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:25.590 21:29:14 -- common/autotest_common.sh@10 -- # set +x 00:26:25.590 21:29:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.590 21:29:14 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.TfUETzv55G 00:26:25.590 21:29:14 -- common/autotest_common.sh@638 -- # local es=0 00:26:25.590 21:29:14 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.TfUETzv55G 00:26:25.590 21:29:14 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:26:25.590 21:29:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:25.590 21:29:14 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:26:25.590 21:29:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:25.590 21:29:14 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.TfUETzv55G 00:26:25.590 21:29:14 -- target/tls.sh@49 -- # local key=/tmp/tmp.TfUETzv55G 00:26:25.590 21:29:14 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:25.849 [2024-04-26 21:29:15.008158] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.849 21:29:15 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:26.109 21:29:15 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:26.368 [2024-04-26 21:29:15.467388] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:26.368 [2024-04-26 21:29:15.467580] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.368 21:29:15 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:26.628 malloc0 00:26:26.628 21:29:15 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:26.887 21:29:15 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TfUETzv55G 00:26:26.887 [2024-04-26 21:29:16.091443] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:26:26.887 [2024-04-26 21:29:16.091489] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:26:26.887 [2024-04-26 21:29:16.091511] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:26:26.887 2024/04/26 21:29:16 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.TfUETzv55G], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:26:26.887 request: 00:26:26.887 { 00:26:26.887 "method": "nvmf_subsystem_add_host", 00:26:26.887 "params": { 00:26:26.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:26.887 "host": 
"nqn.2016-06.io.spdk:host1", 00:26:26.887 "psk": "/tmp/tmp.TfUETzv55G" 00:26:26.887 } 00:26:26.887 } 00:26:26.887 Got JSON-RPC error response 00:26:26.887 GoRPCClient: error on JSON-RPC call 00:26:26.887 21:29:16 -- common/autotest_common.sh@641 -- # es=1 00:26:26.887 21:29:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:26.887 21:29:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:26.887 21:29:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:26.887 21:29:16 -- target/tls.sh@180 -- # killprocess 94167 00:26:26.887 21:29:16 -- common/autotest_common.sh@936 -- # '[' -z 94167 ']' 00:26:26.887 21:29:16 -- common/autotest_common.sh@940 -- # kill -0 94167 00:26:26.887 21:29:16 -- common/autotest_common.sh@941 -- # uname 00:26:27.146 21:29:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:27.146 21:29:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94167 00:26:27.146 killing process with pid 94167 00:26:27.146 21:29:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:27.146 21:29:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:27.146 21:29:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94167' 00:26:27.146 21:29:16 -- common/autotest_common.sh@955 -- # kill 94167 00:26:27.146 21:29:16 -- common/autotest_common.sh@960 -- # wait 94167 00:26:27.146 21:29:16 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.TfUETzv55G 00:26:27.146 21:29:16 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:26:27.146 21:29:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:27.146 21:29:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:27.146 21:29:16 -- common/autotest_common.sh@10 -- # set +x 00:26:27.146 21:29:16 -- nvmf/common.sh@470 -- # nvmfpid=94276 00:26:27.147 21:29:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:27.147 21:29:16 -- nvmf/common.sh@471 -- # waitforlisten 94276 00:26:27.147 21:29:16 -- common/autotest_common.sh@817 -- # '[' -z 94276 ']' 00:26:27.147 21:29:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.147 21:29:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:27.147 21:29:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.147 21:29:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:27.147 21:29:16 -- common/autotest_common.sh@10 -- # set +x 00:26:27.404 [2024-04-26 21:29:16.424284] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:27.404 [2024-04-26 21:29:16.424366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.404 [2024-04-26 21:29:16.562420] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.404 [2024-04-26 21:29:16.615413] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.404 [2024-04-26 21:29:16.615460] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:27.404 [2024-04-26 21:29:16.615467] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.404 [2024-04-26 21:29:16.615473] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.404 [2024-04-26 21:29:16.615478] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.404 [2024-04-26 21:29:16.615509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.340 21:29:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:28.340 21:29:17 -- common/autotest_common.sh@850 -- # return 0 00:26:28.340 21:29:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:28.340 21:29:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:28.340 21:29:17 -- common/autotest_common.sh@10 -- # set +x 00:26:28.340 21:29:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.340 21:29:17 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.TfUETzv55G 00:26:28.340 21:29:17 -- target/tls.sh@49 -- # local key=/tmp/tmp.TfUETzv55G 00:26:28.340 21:29:17 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:28.599 [2024-04-26 21:29:17.599652] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.599 21:29:17 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:28.858 21:29:17 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:28.858 [2024-04-26 21:29:18.054881] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:28.858 [2024-04-26 21:29:18.055072] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.858 21:29:18 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:29.116 malloc0 00:26:29.116 21:29:18 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:29.374 21:29:18 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TfUETzv55G 00:26:29.632 [2024-04-26 21:29:18.783049] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:29.632 21:29:18 -- target/tls.sh@188 -- # bdevperf_pid=94379 00:26:29.632 21:29:18 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:29.632 21:29:18 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:29.632 21:29:18 -- target/tls.sh@191 -- # waitforlisten 94379 /var/tmp/bdevperf.sock 00:26:29.632 21:29:18 -- common/autotest_common.sh@817 -- # '[' -z 94379 ']' 00:26:29.632 21:29:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:29.632 21:29:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:29.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
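The target-side path enforces the same permission rule, which is why tls.sh@181 restores 0600 before the next setup_nvmf_tgt succeeds. A small pre-flight guard along these lines (hypothetical, not part of tls.sh) would make the precondition explicit:

    key=/tmp/tmp.TfUETzv55G
    if [ "$(stat -c '%a' "$key")" != "600" ]; then
        echo "refusing to use $key: expected mode 0600" >&2
        exit 1
    fi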
00:26:29.632 21:29:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:29.632 21:29:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:29.632 21:29:18 -- common/autotest_common.sh@10 -- # set +x 00:26:29.632 [2024-04-26 21:29:18.858405] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:29.632 [2024-04-26 21:29:18.858471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94379 ] 00:26:29.890 [2024-04-26 21:29:18.997862] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.890 [2024-04-26 21:29:19.049759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.827 21:29:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:30.827 21:29:19 -- common/autotest_common.sh@850 -- # return 0 00:26:30.827 21:29:19 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TfUETzv55G 00:26:30.827 [2024-04-26 21:29:19.979520] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:30.827 [2024-04-26 21:29:19.979618] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:30.827 TLSTESTn1 00:26:31.086 21:29:20 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:31.344 21:29:20 -- target/tls.sh@196 -- # tgtconf='{ 00:26:31.344 "subsystems": [ 00:26:31.344 { 00:26:31.344 "subsystem": "keyring", 00:26:31.344 "config": [] 00:26:31.344 }, 00:26:31.344 { 00:26:31.344 "subsystem": "iobuf", 00:26:31.344 "config": [ 00:26:31.344 { 00:26:31.344 "method": "iobuf_set_options", 00:26:31.344 "params": { 00:26:31.344 "large_bufsize": 135168, 00:26:31.344 "large_pool_count": 1024, 00:26:31.344 "small_bufsize": 8192, 00:26:31.344 "small_pool_count": 8192 00:26:31.344 } 00:26:31.344 } 00:26:31.344 ] 00:26:31.344 }, 00:26:31.344 { 00:26:31.344 "subsystem": "sock", 00:26:31.344 "config": [ 00:26:31.344 { 00:26:31.344 "method": "sock_impl_set_options", 00:26:31.344 "params": { 00:26:31.344 "enable_ktls": false, 00:26:31.344 "enable_placement_id": 0, 00:26:31.344 "enable_quickack": false, 00:26:31.344 "enable_recv_pipe": true, 00:26:31.344 "enable_zerocopy_send_client": false, 00:26:31.344 "enable_zerocopy_send_server": true, 00:26:31.344 "impl_name": "posix", 00:26:31.344 "recv_buf_size": 2097152, 00:26:31.344 "send_buf_size": 2097152, 00:26:31.344 "tls_version": 0, 00:26:31.344 "zerocopy_threshold": 0 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "sock_impl_set_options", 00:26:31.345 "params": { 00:26:31.345 "enable_ktls": false, 00:26:31.345 "enable_placement_id": 0, 00:26:31.345 "enable_quickack": false, 00:26:31.345 "enable_recv_pipe": true, 00:26:31.345 "enable_zerocopy_send_client": false, 00:26:31.345 "enable_zerocopy_send_server": true, 00:26:31.345 "impl_name": "ssl", 00:26:31.345 "recv_buf_size": 4096, 00:26:31.345 "send_buf_size": 4096, 00:26:31.345 "tls_version": 0, 00:26:31.345 "zerocopy_threshold": 0 00:26:31.345 } 00:26:31.345 } 00:26:31.345 ] 00:26:31.345 }, 
00:26:31.345 { 00:26:31.345 "subsystem": "vmd", 00:26:31.345 "config": [] 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "subsystem": "accel", 00:26:31.345 "config": [ 00:26:31.345 { 00:26:31.345 "method": "accel_set_options", 00:26:31.345 "params": { 00:26:31.345 "buf_count": 2048, 00:26:31.345 "large_cache_size": 16, 00:26:31.345 "sequence_count": 2048, 00:26:31.345 "small_cache_size": 128, 00:26:31.345 "task_count": 2048 00:26:31.345 } 00:26:31.345 } 00:26:31.345 ] 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "subsystem": "bdev", 00:26:31.345 "config": [ 00:26:31.345 { 00:26:31.345 "method": "bdev_set_options", 00:26:31.345 "params": { 00:26:31.345 "bdev_auto_examine": true, 00:26:31.345 "bdev_io_cache_size": 256, 00:26:31.345 "bdev_io_pool_size": 65535, 00:26:31.345 "iobuf_large_cache_size": 16, 00:26:31.345 "iobuf_small_cache_size": 128 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "bdev_raid_set_options", 00:26:31.345 "params": { 00:26:31.345 "process_window_size_kb": 1024 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "bdev_iscsi_set_options", 00:26:31.345 "params": { 00:26:31.345 "timeout_sec": 30 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "bdev_nvme_set_options", 00:26:31.345 "params": { 00:26:31.345 "action_on_timeout": "none", 00:26:31.345 "allow_accel_sequence": false, 00:26:31.345 "arbitration_burst": 0, 00:26:31.345 "bdev_retry_count": 3, 00:26:31.345 "ctrlr_loss_timeout_sec": 0, 00:26:31.345 "delay_cmd_submit": true, 00:26:31.345 "dhchap_dhgroups": [ 00:26:31.345 "null", 00:26:31.345 "ffdhe2048", 00:26:31.345 "ffdhe3072", 00:26:31.345 "ffdhe4096", 00:26:31.345 "ffdhe6144", 00:26:31.345 "ffdhe8192" 00:26:31.345 ], 00:26:31.345 "dhchap_digests": [ 00:26:31.345 "sha256", 00:26:31.345 "sha384", 00:26:31.345 "sha512" 00:26:31.345 ], 00:26:31.345 "disable_auto_failback": false, 00:26:31.345 "fast_io_fail_timeout_sec": 0, 00:26:31.345 "generate_uuids": false, 00:26:31.345 "high_priority_weight": 0, 00:26:31.345 "io_path_stat": false, 00:26:31.345 "io_queue_requests": 0, 00:26:31.345 "keep_alive_timeout_ms": 10000, 00:26:31.345 "low_priority_weight": 0, 00:26:31.345 "medium_priority_weight": 0, 00:26:31.345 "nvme_adminq_poll_period_us": 10000, 00:26:31.345 "nvme_error_stat": false, 00:26:31.345 "nvme_ioq_poll_period_us": 0, 00:26:31.345 "rdma_cm_event_timeout_ms": 0, 00:26:31.345 "rdma_max_cq_size": 0, 00:26:31.345 "rdma_srq_size": 0, 00:26:31.345 "reconnect_delay_sec": 0, 00:26:31.345 "timeout_admin_us": 0, 00:26:31.345 "timeout_us": 0, 00:26:31.345 "transport_ack_timeout": 0, 00:26:31.345 "transport_retry_count": 4, 00:26:31.345 "transport_tos": 0 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "bdev_nvme_set_hotplug", 00:26:31.345 "params": { 00:26:31.345 "enable": false, 00:26:31.345 "period_us": 100000 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "bdev_malloc_create", 00:26:31.345 "params": { 00:26:31.345 "block_size": 4096, 00:26:31.345 "name": "malloc0", 00:26:31.345 "num_blocks": 8192, 00:26:31.345 "optimal_io_boundary": 0, 00:26:31.345 "physical_block_size": 4096, 00:26:31.345 "uuid": "beb43099-0bee-451f-be4a-f2d5ec55f485" 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "bdev_wait_for_examine" 00:26:31.345 } 00:26:31.345 ] 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "subsystem": "nbd", 00:26:31.345 "config": [] 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "subsystem": "scheduler", 00:26:31.345 "config": [ 00:26:31.345 { 00:26:31.345 "method": 
"framework_set_scheduler", 00:26:31.345 "params": { 00:26:31.345 "name": "static" 00:26:31.345 } 00:26:31.345 } 00:26:31.345 ] 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "subsystem": "nvmf", 00:26:31.345 "config": [ 00:26:31.345 { 00:26:31.345 "method": "nvmf_set_config", 00:26:31.345 "params": { 00:26:31.345 "admin_cmd_passthru": { 00:26:31.345 "identify_ctrlr": false 00:26:31.345 }, 00:26:31.345 "discovery_filter": "match_any" 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "nvmf_set_max_subsystems", 00:26:31.345 "params": { 00:26:31.345 "max_subsystems": 1024 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "nvmf_set_crdt", 00:26:31.345 "params": { 00:26:31.345 "crdt1": 0, 00:26:31.345 "crdt2": 0, 00:26:31.345 "crdt3": 0 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "nvmf_create_transport", 00:26:31.345 "params": { 00:26:31.345 "abort_timeout_sec": 1, 00:26:31.345 "ack_timeout": 0, 00:26:31.345 "buf_cache_size": 4294967295, 00:26:31.345 "c2h_success": false, 00:26:31.345 "data_wr_pool_size": 0, 00:26:31.345 "dif_insert_or_strip": false, 00:26:31.345 "in_capsule_data_size": 4096, 00:26:31.345 "io_unit_size": 131072, 00:26:31.345 "max_aq_depth": 128, 00:26:31.345 "max_io_qpairs_per_ctrlr": 127, 00:26:31.345 "max_io_size": 131072, 00:26:31.345 "max_queue_depth": 128, 00:26:31.345 "num_shared_buffers": 511, 00:26:31.345 "sock_priority": 0, 00:26:31.345 "trtype": "TCP", 00:26:31.345 "zcopy": false 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "nvmf_create_subsystem", 00:26:31.345 "params": { 00:26:31.345 "allow_any_host": false, 00:26:31.345 "ana_reporting": false, 00:26:31.345 "max_cntlid": 65519, 00:26:31.345 "max_namespaces": 10, 00:26:31.345 "min_cntlid": 1, 00:26:31.345 "model_number": "SPDK bdev Controller", 00:26:31.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:31.345 "serial_number": "SPDK00000000000001" 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "nvmf_subsystem_add_host", 00:26:31.345 "params": { 00:26:31.345 "host": "nqn.2016-06.io.spdk:host1", 00:26:31.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:31.345 "psk": "/tmp/tmp.TfUETzv55G" 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "nvmf_subsystem_add_ns", 00:26:31.345 "params": { 00:26:31.345 "namespace": { 00:26:31.345 "bdev_name": "malloc0", 00:26:31.345 "nguid": "BEB430990BEE451FBE4AF2D5EC55F485", 00:26:31.345 "no_auto_visible": false, 00:26:31.345 "nsid": 1, 00:26:31.345 "uuid": "beb43099-0bee-451f-be4a-f2d5ec55f485" 00:26:31.345 }, 00:26:31.345 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:26:31.345 } 00:26:31.345 }, 00:26:31.345 { 00:26:31.345 "method": "nvmf_subsystem_add_listener", 00:26:31.345 "params": { 00:26:31.345 "listen_address": { 00:26:31.345 "adrfam": "IPv4", 00:26:31.345 "traddr": "10.0.0.2", 00:26:31.345 "trsvcid": "4420", 00:26:31.345 "trtype": "TCP" 00:26:31.345 }, 00:26:31.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:31.345 "secure_channel": true 00:26:31.345 } 00:26:31.345 } 00:26:31.345 ] 00:26:31.345 } 00:26:31.345 ] 00:26:31.345 }' 00:26:31.345 21:29:20 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:31.605 21:29:20 -- target/tls.sh@197 -- # bdevperfconf='{ 00:26:31.605 "subsystems": [ 00:26:31.605 { 00:26:31.605 "subsystem": "keyring", 00:26:31.605 "config": [] 00:26:31.605 }, 00:26:31.605 { 00:26:31.605 "subsystem": "iobuf", 00:26:31.605 "config": [ 00:26:31.605 { 00:26:31.605 "method": "iobuf_set_options", 
00:26:31.605 "params": { 00:26:31.605 "large_bufsize": 135168, 00:26:31.605 "large_pool_count": 1024, 00:26:31.605 "small_bufsize": 8192, 00:26:31.605 "small_pool_count": 8192 00:26:31.605 } 00:26:31.605 } 00:26:31.605 ] 00:26:31.605 }, 00:26:31.605 { 00:26:31.605 "subsystem": "sock", 00:26:31.605 "config": [ 00:26:31.605 { 00:26:31.605 "method": "sock_impl_set_options", 00:26:31.605 "params": { 00:26:31.605 "enable_ktls": false, 00:26:31.605 "enable_placement_id": 0, 00:26:31.605 "enable_quickack": false, 00:26:31.605 "enable_recv_pipe": true, 00:26:31.605 "enable_zerocopy_send_client": false, 00:26:31.605 "enable_zerocopy_send_server": true, 00:26:31.605 "impl_name": "posix", 00:26:31.605 "recv_buf_size": 2097152, 00:26:31.605 "send_buf_size": 2097152, 00:26:31.605 "tls_version": 0, 00:26:31.605 "zerocopy_threshold": 0 00:26:31.605 } 00:26:31.605 }, 00:26:31.605 { 00:26:31.605 "method": "sock_impl_set_options", 00:26:31.605 "params": { 00:26:31.605 "enable_ktls": false, 00:26:31.605 "enable_placement_id": 0, 00:26:31.605 "enable_quickack": false, 00:26:31.605 "enable_recv_pipe": true, 00:26:31.605 "enable_zerocopy_send_client": false, 00:26:31.605 "enable_zerocopy_send_server": true, 00:26:31.605 "impl_name": "ssl", 00:26:31.605 "recv_buf_size": 4096, 00:26:31.605 "send_buf_size": 4096, 00:26:31.605 "tls_version": 0, 00:26:31.605 "zerocopy_threshold": 0 00:26:31.605 } 00:26:31.605 } 00:26:31.605 ] 00:26:31.605 }, 00:26:31.605 { 00:26:31.605 "subsystem": "vmd", 00:26:31.606 "config": [] 00:26:31.606 }, 00:26:31.606 { 00:26:31.606 "subsystem": "accel", 00:26:31.606 "config": [ 00:26:31.606 { 00:26:31.606 "method": "accel_set_options", 00:26:31.606 "params": { 00:26:31.606 "buf_count": 2048, 00:26:31.606 "large_cache_size": 16, 00:26:31.606 "sequence_count": 2048, 00:26:31.606 "small_cache_size": 128, 00:26:31.606 "task_count": 2048 00:26:31.606 } 00:26:31.606 } 00:26:31.606 ] 00:26:31.606 }, 00:26:31.606 { 00:26:31.606 "subsystem": "bdev", 00:26:31.606 "config": [ 00:26:31.606 { 00:26:31.606 "method": "bdev_set_options", 00:26:31.606 "params": { 00:26:31.606 "bdev_auto_examine": true, 00:26:31.606 "bdev_io_cache_size": 256, 00:26:31.606 "bdev_io_pool_size": 65535, 00:26:31.606 "iobuf_large_cache_size": 16, 00:26:31.606 "iobuf_small_cache_size": 128 00:26:31.606 } 00:26:31.606 }, 00:26:31.606 { 00:26:31.606 "method": "bdev_raid_set_options", 00:26:31.606 "params": { 00:26:31.606 "process_window_size_kb": 1024 00:26:31.606 } 00:26:31.606 }, 00:26:31.606 { 00:26:31.606 "method": "bdev_iscsi_set_options", 00:26:31.606 "params": { 00:26:31.606 "timeout_sec": 30 00:26:31.606 } 00:26:31.606 }, 00:26:31.606 { 00:26:31.606 "method": "bdev_nvme_set_options", 00:26:31.606 "params": { 00:26:31.606 "action_on_timeout": "none", 00:26:31.606 "allow_accel_sequence": false, 00:26:31.606 "arbitration_burst": 0, 00:26:31.606 "bdev_retry_count": 3, 00:26:31.606 "ctrlr_loss_timeout_sec": 0, 00:26:31.606 "delay_cmd_submit": true, 00:26:31.606 "dhchap_dhgroups": [ 00:26:31.606 "null", 00:26:31.606 "ffdhe2048", 00:26:31.606 "ffdhe3072", 00:26:31.606 "ffdhe4096", 00:26:31.606 "ffdhe6144", 00:26:31.606 "ffdhe8192" 00:26:31.606 ], 00:26:31.606 "dhchap_digests": [ 00:26:31.606 "sha256", 00:26:31.606 "sha384", 00:26:31.606 "sha512" 00:26:31.606 ], 00:26:31.606 "disable_auto_failback": false, 00:26:31.606 "fast_io_fail_timeout_sec": 0, 00:26:31.606 "generate_uuids": false, 00:26:31.606 "high_priority_weight": 0, 00:26:31.606 "io_path_stat": false, 00:26:31.606 "io_queue_requests": 512, 00:26:31.606 
"keep_alive_timeout_ms": 10000, 00:26:31.606 "low_priority_weight": 0, 00:26:31.606 "medium_priority_weight": 0, 00:26:31.606 "nvme_adminq_poll_period_us": 10000, 00:26:31.606 "nvme_error_stat": false, 00:26:31.606 "nvme_ioq_poll_period_us": 0, 00:26:31.606 "rdma_cm_event_timeout_ms": 0, 00:26:31.606 "rdma_max_cq_size": 0, 00:26:31.606 "rdma_srq_size": 0, 00:26:31.606 "reconnect_delay_sec": 0, 00:26:31.606 "timeout_admin_us": 0, 00:26:31.606 "timeout_us": 0, 00:26:31.606 "transport_ack_timeout": 0, 00:26:31.606 "transport_retry_count": 4, 00:26:31.606 "transport_tos": 0 00:26:31.606 } 00:26:31.606 }, 00:26:31.606 { 00:26:31.606 "method": "bdev_nvme_attach_controller", 00:26:31.606 "params": { 00:26:31.606 "adrfam": "IPv4", 00:26:31.606 "ctrlr_loss_timeout_sec": 0, 00:26:31.606 "ddgst": false, 00:26:31.606 "fast_io_fail_timeout_sec": 0, 00:26:31.606 "hdgst": false, 00:26:31.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:31.606 "name": "TLSTEST", 00:26:31.606 "prchk_guard": false, 00:26:31.606 "prchk_reftag": false, 00:26:31.606 "psk": "/tmp/tmp.TfUETzv55G", 00:26:31.606 "reconnect_delay_sec": 0, 00:26:31.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:31.606 "traddr": "10.0.0.2", 00:26:31.606 "trsvcid": "4420", 00:26:31.606 "trtype": "TCP" 00:26:31.606 } 00:26:31.606 }, 00:26:31.606 { 00:26:31.606 "method": "bdev_nvme_set_hotplug", 00:26:31.606 "params": { 00:26:31.606 "enable": false, 00:26:31.606 "period_us": 100000 00:26:31.606 } 00:26:31.606 }, 00:26:31.606 { 00:26:31.606 "method": "bdev_wait_for_examine" 00:26:31.606 } 00:26:31.606 ] 00:26:31.606 }, 00:26:31.606 { 00:26:31.606 "subsystem": "nbd", 00:26:31.606 "config": [] 00:26:31.606 } 00:26:31.606 ] 00:26:31.606 }' 00:26:31.606 21:29:20 -- target/tls.sh@199 -- # killprocess 94379 00:26:31.606 21:29:20 -- common/autotest_common.sh@936 -- # '[' -z 94379 ']' 00:26:31.606 21:29:20 -- common/autotest_common.sh@940 -- # kill -0 94379 00:26:31.606 21:29:20 -- common/autotest_common.sh@941 -- # uname 00:26:31.606 21:29:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:31.606 21:29:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94379 00:26:31.606 21:29:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:31.606 21:29:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:31.606 killing process with pid 94379 00:26:31.606 21:29:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94379' 00:26:31.606 21:29:20 -- common/autotest_common.sh@955 -- # kill 94379 00:26:31.606 Received shutdown signal, test time was about 10.000000 seconds 00:26:31.606 00:26:31.606 Latency(us) 00:26:31.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.606 =================================================================================================================== 00:26:31.606 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:31.606 [2024-04-26 21:29:20.784096] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:31.606 21:29:20 -- common/autotest_common.sh@960 -- # wait 94379 00:26:31.864 21:29:20 -- target/tls.sh@200 -- # killprocess 94276 00:26:31.864 21:29:20 -- common/autotest_common.sh@936 -- # '[' -z 94276 ']' 00:26:31.864 21:29:20 -- common/autotest_common.sh@940 -- # kill -0 94276 00:26:31.864 21:29:20 -- common/autotest_common.sh@941 -- # uname 00:26:31.864 21:29:20 -- common/autotest_common.sh@941 -- # '[' Linux 
= Linux ']' 00:26:31.864 21:29:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94276 00:26:31.864 21:29:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:31.864 21:29:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:31.864 21:29:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94276' 00:26:31.864 killing process with pid 94276 00:26:31.865 21:29:21 -- common/autotest_common.sh@955 -- # kill 94276 00:26:31.865 [2024-04-26 21:29:21.009874] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:31.865 21:29:21 -- common/autotest_common.sh@960 -- # wait 94276 00:26:32.127 21:29:21 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:26:32.127 21:29:21 -- target/tls.sh@203 -- # echo '{ 00:26:32.127 "subsystems": [ 00:26:32.127 { 00:26:32.127 "subsystem": "keyring", 00:26:32.127 "config": [] 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "subsystem": "iobuf", 00:26:32.127 "config": [ 00:26:32.127 { 00:26:32.127 "method": "iobuf_set_options", 00:26:32.127 "params": { 00:26:32.127 "large_bufsize": 135168, 00:26:32.127 "large_pool_count": 1024, 00:26:32.127 "small_bufsize": 8192, 00:26:32.127 "small_pool_count": 8192 00:26:32.127 } 00:26:32.127 } 00:26:32.127 ] 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "subsystem": "sock", 00:26:32.127 "config": [ 00:26:32.127 { 00:26:32.127 "method": "sock_impl_set_options", 00:26:32.127 "params": { 00:26:32.127 "enable_ktls": false, 00:26:32.127 "enable_placement_id": 0, 00:26:32.127 "enable_quickack": false, 00:26:32.127 "enable_recv_pipe": true, 00:26:32.127 "enable_zerocopy_send_client": false, 00:26:32.127 "enable_zerocopy_send_server": true, 00:26:32.127 "impl_name": "posix", 00:26:32.127 "recv_buf_size": 2097152, 00:26:32.127 "send_buf_size": 2097152, 00:26:32.127 "tls_version": 0, 00:26:32.127 "zerocopy_threshold": 0 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "sock_impl_set_options", 00:26:32.127 "params": { 00:26:32.127 "enable_ktls": false, 00:26:32.127 "enable_placement_id": 0, 00:26:32.127 "enable_quickack": false, 00:26:32.127 "enable_recv_pipe": true, 00:26:32.127 "enable_zerocopy_send_client": false, 00:26:32.127 "enable_zerocopy_send_server": true, 00:26:32.127 "impl_name": "ssl", 00:26:32.127 "recv_buf_size": 4096, 00:26:32.127 "send_buf_size": 4096, 00:26:32.127 "tls_version": 0, 00:26:32.127 "zerocopy_threshold": 0 00:26:32.127 } 00:26:32.127 } 00:26:32.127 ] 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "subsystem": "vmd", 00:26:32.127 "config": [] 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "subsystem": "accel", 00:26:32.127 "config": [ 00:26:32.127 { 00:26:32.127 "method": "accel_set_options", 00:26:32.127 "params": { 00:26:32.127 "buf_count": 2048, 00:26:32.127 "large_cache_size": 16, 00:26:32.127 "sequence_count": 2048, 00:26:32.127 "small_cache_size": 128, 00:26:32.127 "task_count": 2048 00:26:32.127 } 00:26:32.127 } 00:26:32.127 ] 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "subsystem": "bdev", 00:26:32.127 "config": [ 00:26:32.127 { 00:26:32.127 "method": "bdev_set_options", 00:26:32.127 "params": { 00:26:32.127 "bdev_auto_examine": true, 00:26:32.127 "bdev_io_cache_size": 256, 00:26:32.127 "bdev_io_pool_size": 65535, 00:26:32.127 "iobuf_large_cache_size": 16, 00:26:32.127 "iobuf_small_cache_size": 128 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "bdev_raid_set_options", 00:26:32.127 "params": { 00:26:32.127 
"process_window_size_kb": 1024 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "bdev_iscsi_set_options", 00:26:32.127 "params": { 00:26:32.127 "timeout_sec": 30 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "bdev_nvme_set_options", 00:26:32.127 "params": { 00:26:32.127 "action_on_timeout": "none", 00:26:32.127 "allow_accel_sequence": false, 00:26:32.127 "arbitration_burst": 0, 00:26:32.127 "bdev_retry_count": 3, 00:26:32.127 "ctrlr_loss_timeout_sec": 0, 00:26:32.127 "delay_cmd_submit": true, 00:26:32.127 "dhchap_dhgroups": [ 00:26:32.127 "null", 00:26:32.127 "ffdhe2048", 00:26:32.127 "ffdhe3072", 00:26:32.127 "ffdhe4096", 00:26:32.127 "ffdhe6144", 00:26:32.127 "ffdhe8192" 00:26:32.127 ], 00:26:32.127 "dhchap_digests": [ 00:26:32.127 "sha256", 00:26:32.127 "sha384", 00:26:32.127 "sha512" 00:26:32.127 ], 00:26:32.127 "disable_auto_failback": false, 00:26:32.127 "fast_io_fail_timeout_sec": 0, 00:26:32.127 "generate_uuids": false, 00:26:32.127 "high_priority_weight": 0, 00:26:32.127 "io_path_stat": false, 00:26:32.127 "io_queue_requests": 0, 00:26:32.127 "keep_alive_timeout_ms": 10000, 00:26:32.127 "low_priority_weight": 0, 00:26:32.127 "medium_priority_weight": 0, 00:26:32.127 "nvme_adminq_poll_period_us": 10000, 00:26:32.127 "nvme_error_stat": false, 00:26:32.127 "nvme_ioq_poll_period_us": 0, 00:26:32.127 "rdma_cm_event_timeout_ms": 0, 00:26:32.127 "rdma_max_cq_size": 0, 00:26:32.127 "rdma_srq_size": 0, 00:26:32.127 "reconnect_delay_sec": 0, 00:26:32.127 "timeout_admin_us": 0, 00:26:32.127 "timeout_us": 0, 00:26:32.127 "transport_ack_timeout": 0, 00:26:32.127 "transport_retry_count": 4, 00:26:32.127 "transport_tos": 0 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "bdev_nvme_set_hotplug", 00:26:32.127 "params": { 00:26:32.127 "enable": false, 00:26:32.127 "period_us": 100000 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "bdev_malloc_create", 00:26:32.127 "params": { 00:26:32.127 "block_size": 4096, 00:26:32.127 "name": "malloc0", 00:26:32.127 "num_blocks": 8192, 00:26:32.127 "optimal_io_boundary": 0, 00:26:32.127 "physical_block_size": 4096, 00:26:32.127 "uuid": "beb43099-0bee-451f-be4a-f2d5ec55f485" 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "bdev_wait_for_examine" 00:26:32.127 } 00:26:32.127 ] 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "subsystem": "nbd", 00:26:32.127 "config": [] 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "subsystem": "scheduler", 00:26:32.127 "config": [ 00:26:32.127 { 00:26:32.127 "method": "framework_set_scheduler", 00:26:32.127 "params": { 00:26:32.127 "name": "static" 00:26:32.127 } 00:26:32.127 } 00:26:32.127 ] 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "subsystem": "nvmf", 00:26:32.127 "config": [ 00:26:32.127 { 00:26:32.127 "method": "nvmf_set_config", 00:26:32.127 "params": { 00:26:32.127 "admin_cmd_passthru": { 00:26:32.127 "identify_ctrlr": false 00:26:32.127 }, 00:26:32.127 "discovery_filter": "match_any" 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "nvmf_set_max_subsystems", 00:26:32.127 "params": { 00:26:32.127 "max_subsystems": 1024 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "nvmf_set_crdt", 00:26:32.127 "params": { 00:26:32.127 "crdt1": 0, 00:26:32.127 "crdt2": 0, 00:26:32.127 "crdt3": 0 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "nvmf_create_transport", 00:26:32.127 "params": { 00:26:32.127 "abort_timeout_sec": 1, 00:26:32.127 "ack_timeout": 0, 00:26:32.127 
"buf_cache_size": 4294967295, 00:26:32.127 "c2h_success": false, 00:26:32.127 "data_wr_pool_size": 0, 00:26:32.127 "dif_insert_or_strip": false, 00:26:32.127 "in_capsule_data_size": 4096, 00:26:32.127 "io_unit_size": 131072, 00:26:32.127 "max_aq_depth": 128, 00:26:32.127 "max_io_qpairs_per_ctrlr": 127, 00:26:32.127 "max_io_size": 131072, 00:26:32.127 "max_queue_depth": 128, 00:26:32.127 "num_shared_buffers": 511, 00:26:32.127 "sock_priority": 0, 00:26:32.127 "trtype": "TCP", 00:26:32.127 "zcopy": false 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "nvmf_create_subsystem", 00:26:32.127 "params": { 00:26:32.127 "allow_any_host": false, 00:26:32.127 "ana_reporting": false, 00:26:32.127 "max_cntlid": 65519, 00:26:32.127 "max_namespaces": 10, 00:26:32.127 "min_cntlid": 1, 00:26:32.127 "model_number": "SPDK bdev Controller", 00:26:32.127 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:32.127 "serial_number": "SPDK00000000000001" 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "nvmf_subsystem_add_host", 00:26:32.127 "params": { 00:26:32.127 "host": "nqn.2016-06.io.spdk:host1", 00:26:32.127 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:32.127 "psk": "/tmp/tmp.TfUETzv55G" 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "nvmf_subsystem_add_ns", 00:26:32.127 "params": { 00:26:32.127 "namespace": { 00:26:32.127 "bdev_name": "malloc0", 00:26:32.127 "nguid": "BEB430990BEE451FBE4AF2D5EC55F485", 00:26:32.127 "no_auto_visible": false, 00:26:32.127 "nsid": 1, 00:26:32.127 "uuid": "beb43099-0bee-451f-be4a-f2d5ec55f485" 00:26:32.127 }, 00:26:32.127 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:26:32.127 } 00:26:32.127 }, 00:26:32.127 { 00:26:32.127 "method": "nvmf_subsystem_add_listener", 00:26:32.127 "params": { 00:26:32.127 "listen_address": { 00:26:32.127 "adrfam": "IPv4", 00:26:32.127 "traddr": "10.0.0.2", 00:26:32.127 "trsvcid": "4420", 00:26:32.127 "trtype": "TCP" 00:26:32.127 }, 00:26:32.127 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:32.127 "secure_channel": true 00:26:32.127 } 00:26:32.127 } 00:26:32.127 ] 00:26:32.127 } 00:26:32.127 ] 00:26:32.127 }' 00:26:32.127 21:29:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:32.127 21:29:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:32.127 21:29:21 -- common/autotest_common.sh@10 -- # set +x 00:26:32.127 21:29:21 -- nvmf/common.sh@470 -- # nvmfpid=94452 00:26:32.127 21:29:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:26:32.127 21:29:21 -- nvmf/common.sh@471 -- # waitforlisten 94452 00:26:32.127 21:29:21 -- common/autotest_common.sh@817 -- # '[' -z 94452 ']' 00:26:32.127 21:29:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.127 21:29:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:32.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.128 21:29:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.128 21:29:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:32.128 21:29:21 -- common/autotest_common.sh@10 -- # set +x 00:26:32.128 [2024-04-26 21:29:21.271977] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:26:32.128 [2024-04-26 21:29:21.272049] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.387 [2024-04-26 21:29:21.412678] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.387 [2024-04-26 21:29:21.466185] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.387 [2024-04-26 21:29:21.466239] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.387 [2024-04-26 21:29:21.466247] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.387 [2024-04-26 21:29:21.466253] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.387 [2024-04-26 21:29:21.466258] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.387 [2024-04-26 21:29:21.466352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.646 [2024-04-26 21:29:21.662784] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.646 [2024-04-26 21:29:21.678679] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:32.646 [2024-04-26 21:29:21.694665] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:32.646 [2024-04-26 21:29:21.694865] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.214 21:29:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:33.214 21:29:22 -- common/autotest_common.sh@850 -- # return 0 00:26:33.214 21:29:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:33.214 21:29:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:33.214 21:29:22 -- common/autotest_common.sh@10 -- # set +x 00:26:33.214 21:29:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.214 21:29:22 -- target/tls.sh@207 -- # bdevperf_pid=94497 00:26:33.214 21:29:22 -- target/tls.sh@208 -- # waitforlisten 94497 /var/tmp/bdevperf.sock 00:26:33.214 21:29:22 -- common/autotest_common.sh@817 -- # '[' -z 94497 ']' 00:26:33.214 21:29:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:33.214 21:29:22 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:26:33.214 21:29:22 -- target/tls.sh@204 -- # echo '{ 00:26:33.214 "subsystems": [ 00:26:33.214 { 00:26:33.214 "subsystem": "keyring", 00:26:33.214 "config": [] 00:26:33.214 }, 00:26:33.214 { 00:26:33.214 "subsystem": "iobuf", 00:26:33.214 "config": [ 00:26:33.214 { 00:26:33.214 "method": "iobuf_set_options", 00:26:33.214 "params": { 00:26:33.214 "large_bufsize": 135168, 00:26:33.214 "large_pool_count": 1024, 00:26:33.214 "small_bufsize": 8192, 00:26:33.214 "small_pool_count": 8192 00:26:33.214 } 00:26:33.214 } 00:26:33.214 ] 00:26:33.214 }, 00:26:33.214 { 00:26:33.214 "subsystem": "sock", 00:26:33.214 "config": [ 00:26:33.214 { 00:26:33.214 "method": "sock_impl_set_options", 00:26:33.214 "params": { 00:26:33.214 "enable_ktls": false, 00:26:33.214 "enable_placement_id": 0, 00:26:33.214 "enable_quickack": false, 00:26:33.214 "enable_recv_pipe": true, 
00:26:33.214 "enable_zerocopy_send_client": false, 00:26:33.214 "enable_zerocopy_send_server": true, 00:26:33.214 "impl_name": "posix", 00:26:33.214 "recv_buf_size": 2097152, 00:26:33.214 "send_buf_size": 2097152, 00:26:33.214 "tls_version": 0, 00:26:33.214 "zerocopy_threshold": 0 00:26:33.214 } 00:26:33.214 }, 00:26:33.214 { 00:26:33.214 "method": "sock_impl_set_options", 00:26:33.214 "params": { 00:26:33.214 "enable_ktls": false, 00:26:33.214 "enable_placement_id": 0, 00:26:33.214 "enable_quickack": false, 00:26:33.214 "enable_recv_pipe": true, 00:26:33.214 "enable_zerocopy_send_client": false, 00:26:33.214 "enable_zerocopy_send_server": true, 00:26:33.214 "impl_name": "ssl", 00:26:33.214 "recv_buf_size": 4096, 00:26:33.214 "send_buf_size": 4096, 00:26:33.214 "tls_version": 0, 00:26:33.214 "zerocopy_threshold": 0 00:26:33.214 } 00:26:33.214 } 00:26:33.214 ] 00:26:33.214 }, 00:26:33.214 { 00:26:33.214 "subsystem": "vmd", 00:26:33.214 "config": [] 00:26:33.214 }, 00:26:33.214 { 00:26:33.214 "subsystem": "accel", 00:26:33.214 "config": [ 00:26:33.214 { 00:26:33.214 "method": "accel_set_options", 00:26:33.214 "params": { 00:26:33.214 "buf_count": 2048, 00:26:33.214 "large_cache_size": 16, 00:26:33.214 "sequence_count": 2048, 00:26:33.214 "small_cache_size": 128, 00:26:33.214 "task_count": 2048 00:26:33.214 } 00:26:33.214 } 00:26:33.214 ] 00:26:33.214 }, 00:26:33.214 { 00:26:33.214 "subsystem": "bdev", 00:26:33.214 "config": [ 00:26:33.214 { 00:26:33.214 "method": "bdev_set_options", 00:26:33.214 "params": { 00:26:33.214 "bdev_auto_examine": true, 00:26:33.214 "bdev_io_cache_size": 256, 00:26:33.214 "bdev_io_pool_size": 65535, 00:26:33.214 "iobuf_large_cache_size": 16, 00:26:33.214 "iobuf_small_cache_size": 128 00:26:33.214 } 00:26:33.214 }, 00:26:33.214 { 00:26:33.214 "method": "bdev_raid_set_options", 00:26:33.214 "params": { 00:26:33.214 "process_window_size_kb": 1024 00:26:33.214 } 00:26:33.214 }, 00:26:33.214 { 00:26:33.214 "method": "bdev_iscsi_set_options", 00:26:33.214 "params": { 00:26:33.214 "timeout_sec": 30 00:26:33.214 } 00:26:33.214 }, 00:26:33.214 { 00:26:33.214 "method": "bdev_nvme_set_options", 00:26:33.214 "params": { 00:26:33.214 "action_on_timeout": "none", 00:26:33.214 "allow_accel_sequence": false, 00:26:33.214 "arbitration_burst": 0, 00:26:33.214 "bdev_retry_count": 3, 00:26:33.214 "ctrlr_loss_timeout_sec": 0, 00:26:33.214 "delay_cmd_submit": true, 00:26:33.214 "dhchap_dhgroups": [ 00:26:33.214 "null", 00:26:33.214 "ffdhe2048", 00:26:33.214 "ffdhe3072", 00:26:33.214 "ffdhe4096", 00:26:33.214 "ffdhe6144", 00:26:33.214 "ffdhe8192" 00:26:33.214 ], 00:26:33.214 "dhchap_digests": [ 00:26:33.214 "sha256", 00:26:33.214 "sha384", 00:26:33.214 "sha512" 00:26:33.214 ], 00:26:33.214 "disable_auto_failback": false, 00:26:33.214 "fast_io_fail_timeout_sec": 0, 00:26:33.215 "generate_uuids": false, 00:26:33.215 "high_priority_weight": 0, 00:26:33.215 "io_path_stat": false, 00:26:33.215 "io_queue_requests": 512, 00:26:33.215 "keep_alive_timeout_ms": 10000, 00:26:33.215 "low_priority_weight": 0, 00:26:33.215 "medium_priority_weight": 0, 00:26:33.215 "nvme_adminq_poll_period_us": 10000, 00:26:33.215 "nvme_error_stat": false, 00:26:33.215 "nvme_ioq_poll_period_us": 0, 00:26:33.215 "rdma_cm_event_timeout_ms": 0, 00:26:33.215 "rdma_max_cq_size": 0, 00:26:33.215 "rdma_srq_size": 0, 00:26:33.215 "reconnect_delay_sec": 0, 00:26:33.215 "timeout_admin_us": 0, 00:26:33.215 "timeout_us": 0, 00:26:33.215 "transport_ack_timeout": 0, 00:26:33.215 "transport_retry_count": 4, 00:26:33.215 
"transport_tos": 0 00:26:33.215 } 00:26:33.215 }, 00:26:33.215 { 00:26:33.215 "method": "bdev_nvme_attach_controller", 00:26:33.215 "params": { 00:26:33.215 "adrfam": "IPv4", 00:26:33.215 "ctrlr_loss_timeout_sec": 0, 00:26:33.215 "ddgst": false, 00:26:33.215 "fast_io_fail_timeout_sec": 0, 00:26:33.215 "hdgst": false, 00:26:33.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:33.215 "name": "TLSTEST", 00:26:33.215 "prchk_guard": false, 00:26:33.215 "prchk_reftag": false, 00:26:33.215 "psk": "/tmp/tmp.TfUETzv55G", 00:26:33.215 "reconnect_delay_sec": 0, 00:26:33.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:33.215 "traddr": "10.0.0.2", 00:26:33.215 "trsvcid": "4420", 00:26:33.215 "trtype": "TCP" 00:26:33.215 } 00:26:33.215 }, 00:26:33.215 { 00:26:33.215 "method": "bdev_nvme_set_hotplug", 00:26:33.215 "params": { 00:26:33.215 "enable": false, 00:26:33.215 "period_us": 100000 00:26:33.215 } 00:26:33.215 }, 00:26:33.215 { 00:26:33.215 "method": "bdev_wait_for_examine" 00:26:33.215 } 00:26:33.215 ] 00:26:33.215 }, 00:26:33.215 { 00:26:33.215 "subsystem": "nbd", 00:26:33.215 "config": [] 00:26:33.215 } 00:26:33.215 ] 00:26:33.215 }' 00:26:33.215 21:29:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:33.215 21:29:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:33.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:33.215 21:29:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:33.215 21:29:22 -- common/autotest_common.sh@10 -- # set +x 00:26:33.215 [2024-04-26 21:29:22.308629] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:33.215 [2024-04-26 21:29:22.309153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94497 ] 00:26:33.215 [2024-04-26 21:29:22.435235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.473 [2024-04-26 21:29:22.489138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.474 [2024-04-26 21:29:22.624459] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:33.474 [2024-04-26 21:29:22.624560] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:34.042 21:29:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:34.042 21:29:23 -- common/autotest_common.sh@850 -- # return 0 00:26:34.042 21:29:23 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:34.300 Running I/O for 10 seconds... 
00:26:44.291 00:26:44.291 Latency(us) 00:26:44.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.291 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:44.291 Verification LBA range: start 0x0 length 0x2000 00:26:44.291 TLSTESTn1 : 10.01 5207.66 20.34 0.00 0.00 24537.68 5122.68 19460.47 00:26:44.291 =================================================================================================================== 00:26:44.291 Total : 5207.66 20.34 0.00 0.00 24537.68 5122.68 19460.47 00:26:44.291 0 00:26:44.291 21:29:33 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:44.291 21:29:33 -- target/tls.sh@214 -- # killprocess 94497 00:26:44.291 21:29:33 -- common/autotest_common.sh@936 -- # '[' -z 94497 ']' 00:26:44.291 21:29:33 -- common/autotest_common.sh@940 -- # kill -0 94497 00:26:44.291 21:29:33 -- common/autotest_common.sh@941 -- # uname 00:26:44.291 21:29:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:44.291 21:29:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94497 00:26:44.291 killing process with pid 94497 00:26:44.291 21:29:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:44.291 21:29:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:44.291 21:29:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94497' 00:26:44.291 21:29:33 -- common/autotest_common.sh@955 -- # kill 94497 00:26:44.291 Received shutdown signal, test time was about 10.000000 seconds 00:26:44.291 00:26:44.291 Latency(us) 00:26:44.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.291 =================================================================================================================== 00:26:44.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:44.291 [2024-04-26 21:29:33.383269] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:44.291 21:29:33 -- common/autotest_common.sh@960 -- # wait 94497 00:26:44.550 21:29:33 -- target/tls.sh@215 -- # killprocess 94452 00:26:44.550 21:29:33 -- common/autotest_common.sh@936 -- # '[' -z 94452 ']' 00:26:44.550 21:29:33 -- common/autotest_common.sh@940 -- # kill -0 94452 00:26:44.550 21:29:33 -- common/autotest_common.sh@941 -- # uname 00:26:44.550 21:29:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:44.550 21:29:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94452 00:26:44.550 21:29:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:44.550 killing process with pid 94452 00:26:44.550 21:29:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:44.550 21:29:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94452' 00:26:44.550 21:29:33 -- common/autotest_common.sh@955 -- # kill 94452 00:26:44.550 [2024-04-26 21:29:33.610413] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:44.550 21:29:33 -- common/autotest_common.sh@960 -- # wait 94452 00:26:44.809 21:29:33 -- target/tls.sh@218 -- # nvmfappstart 00:26:44.809 21:29:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:44.809 21:29:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:44.809 21:29:33 -- common/autotest_common.sh@10 -- # set +x 00:26:44.809 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.809 21:29:33 -- nvmf/common.sh@470 -- # nvmfpid=94643 00:26:44.809 21:29:33 -- nvmf/common.sh@471 -- # waitforlisten 94643 00:26:44.809 21:29:33 -- common/autotest_common.sh@817 -- # '[' -z 94643 ']' 00:26:44.809 21:29:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.809 21:29:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:44.809 21:29:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.809 21:29:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:44.809 21:29:33 -- common/autotest_common.sh@10 -- # set +x 00:26:44.809 21:29:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:44.809 [2024-04-26 21:29:33.866584] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:44.809 [2024-04-26 21:29:33.866652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.809 [2024-04-26 21:29:34.004512] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.809 [2024-04-26 21:29:34.052880] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.809 [2024-04-26 21:29:34.052930] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.809 [2024-04-26 21:29:34.052936] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.809 [2024-04-26 21:29:34.052941] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.809 [2024-04-26 21:29:34.052946] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
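The tracepoint notices above appear because the target is launched with -e 0xFFFF, so every trace group is enabled for this run. Capturing the events uses exactly the two options the notice names; a short sketch, where only the copy destination is an assumption and the rest is quoted from the notice:

    # live snapshot of the running nvmf target's tracepoints (shm name "nvmf", instance id 0)
    spdk_trace -s nvmf -i 0

    # or keep the shared-memory trace file for offline analysis/debug later
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0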
00:26:44.809 [2024-04-26 21:29:34.052965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.748 21:29:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:45.748 21:29:34 -- common/autotest_common.sh@850 -- # return 0 00:26:45.748 21:29:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:45.748 21:29:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:45.748 21:29:34 -- common/autotest_common.sh@10 -- # set +x 00:26:45.748 21:29:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.748 21:29:34 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.TfUETzv55G 00:26:45.748 21:29:34 -- target/tls.sh@49 -- # local key=/tmp/tmp.TfUETzv55G 00:26:45.748 21:29:34 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:45.748 [2024-04-26 21:29:34.987137] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.008 21:29:35 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:46.008 21:29:35 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:46.267 [2024-04-26 21:29:35.422474] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:46.267 [2024-04-26 21:29:35.422747] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.267 21:29:35 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:46.526 malloc0 00:26:46.526 21:29:35 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:46.786 21:29:35 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TfUETzv55G 00:26:47.045 [2024-04-26 21:29:36.050521] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:47.045 21:29:36 -- target/tls.sh@222 -- # bdevperf_pid=94746 00:26:47.045 21:29:36 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:47.045 21:29:36 -- target/tls.sh@225 -- # waitforlisten 94746 /var/tmp/bdevperf.sock 00:26:47.045 21:29:36 -- common/autotest_common.sh@817 -- # '[' -z 94746 ']' 00:26:47.045 21:29:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:47.045 21:29:36 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:47.045 21:29:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:47.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:47.045 21:29:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:47.045 21:29:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:47.045 21:29:36 -- common/autotest_common.sh@10 -- # set +x 00:26:47.045 [2024-04-26 21:29:36.128572] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:26:47.045 [2024-04-26 21:29:36.128639] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94746 ] 00:26:47.045 [2024-04-26 21:29:36.253555] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.313 [2024-04-26 21:29:36.305904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.881 21:29:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:47.881 21:29:36 -- common/autotest_common.sh@850 -- # return 0 00:26:47.881 21:29:36 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TfUETzv55G 00:26:48.141 21:29:37 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:48.399 [2024-04-26 21:29:37.419479] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:48.399 nvme0n1 00:26:48.399 21:29:37 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:48.399 Running I/O for 1 seconds... 00:26:49.777 00:26:49.777 Latency(us) 00:26:49.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.777 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:49.777 Verification LBA range: start 0x0 length 0x2000 00:26:49.777 nvme0n1 : 1.01 5150.44 20.12 0.00 0.00 24652.37 4865.12 18888.10 00:26:49.777 =================================================================================================================== 00:26:49.777 Total : 5150.44 20.12 0.00 0.00 24652.37 4865.12 18888.10 00:26:49.777 0 00:26:49.777 21:29:38 -- target/tls.sh@234 -- # killprocess 94746 00:26:49.777 21:29:38 -- common/autotest_common.sh@936 -- # '[' -z 94746 ']' 00:26:49.777 21:29:38 -- common/autotest_common.sh@940 -- # kill -0 94746 00:26:49.777 21:29:38 -- common/autotest_common.sh@941 -- # uname 00:26:49.777 21:29:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:49.777 21:29:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94746 00:26:49.777 killing process with pid 94746 00:26:49.777 Received shutdown signal, test time was about 1.000000 seconds 00:26:49.777 00:26:49.777 Latency(us) 00:26:49.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.777 =================================================================================================================== 00:26:49.777 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:49.777 21:29:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:49.777 21:29:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:49.777 21:29:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94746' 00:26:49.777 21:29:38 -- common/autotest_common.sh@955 -- # kill 94746 00:26:49.777 21:29:38 -- common/autotest_common.sh@960 -- # wait 94746 00:26:49.777 21:29:38 -- target/tls.sh@235 -- # killprocess 94643 00:26:49.777 21:29:38 -- common/autotest_common.sh@936 -- # '[' -z 94643 ']' 00:26:49.777 21:29:38 -- common/autotest_common.sh@940 -- # kill -0 94643 00:26:49.777 21:29:38 -- common/autotest_common.sh@941 -- # 
uname 00:26:49.777 21:29:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:49.777 21:29:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94643 00:26:49.777 killing process with pid 94643 00:26:49.777 21:29:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:49.777 21:29:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:49.777 21:29:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94643' 00:26:49.777 21:29:38 -- common/autotest_common.sh@955 -- # kill 94643 00:26:49.777 [2024-04-26 21:29:38.900663] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:49.777 21:29:38 -- common/autotest_common.sh@960 -- # wait 94643 00:26:50.035 21:29:39 -- target/tls.sh@238 -- # nvmfappstart 00:26:50.035 21:29:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:50.035 21:29:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:50.035 21:29:39 -- common/autotest_common.sh@10 -- # set +x 00:26:50.035 21:29:39 -- nvmf/common.sh@470 -- # nvmfpid=94816 00:26:50.035 21:29:39 -- nvmf/common.sh@471 -- # waitforlisten 94816 00:26:50.036 21:29:39 -- common/autotest_common.sh@817 -- # '[' -z 94816 ']' 00:26:50.036 21:29:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.036 21:29:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:50.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.036 21:29:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.036 21:29:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:50.036 21:29:39 -- common/autotest_common.sh@10 -- # set +x 00:26:50.036 21:29:39 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:50.036 [2024-04-26 21:29:39.156229] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:50.036 [2024-04-26 21:29:39.156300] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.295 [2024-04-26 21:29:39.299168] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.295 [2024-04-26 21:29:39.353009] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.295 [2024-04-26 21:29:39.353065] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.295 [2024-04-26 21:29:39.353072] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.295 [2024-04-26 21:29:39.353078] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.295 [2024-04-26 21:29:39.353082] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
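The rpc_cmd block that follows recreates the target-side state for this PSK: the TCP transport, the malloc0 namespace and a TLS listener on 10.0.0.2:4420. Spelled out as individual rpc.py calls, as setup_nvmf_tgt did earlier in this log (repo-relative paths are an abbreviation; addresses, NQNs and the key file are taken from the log):

    # target-side NVMe/TCP + TLS PSK setup, one RPC per step
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TfUETzv55G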
00:26:50.295 [2024-04-26 21:29:39.353106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.864 21:29:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:50.864 21:29:40 -- common/autotest_common.sh@850 -- # return 0 00:26:50.864 21:29:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:50.864 21:29:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:50.864 21:29:40 -- common/autotest_common.sh@10 -- # set +x 00:26:50.864 21:29:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.864 21:29:40 -- target/tls.sh@239 -- # rpc_cmd 00:26:50.864 21:29:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.864 21:29:40 -- common/autotest_common.sh@10 -- # set +x 00:26:50.864 [2024-04-26 21:29:40.100841] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.122 malloc0 00:26:51.122 [2024-04-26 21:29:40.129709] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:51.122 [2024-04-26 21:29:40.129904] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.122 21:29:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.122 21:29:40 -- target/tls.sh@252 -- # bdevperf_pid=94866 00:26:51.122 21:29:40 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:51.122 21:29:40 -- target/tls.sh@254 -- # waitforlisten 94866 /var/tmp/bdevperf.sock 00:26:51.122 21:29:40 -- common/autotest_common.sh@817 -- # '[' -z 94866 ']' 00:26:51.122 21:29:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:51.122 21:29:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:51.122 21:29:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:51.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:51.122 21:29:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:51.122 21:29:40 -- common/autotest_common.sh@10 -- # set +x 00:26:51.122 [2024-04-26 21:29:40.211954] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:26:51.122 [2024-04-26 21:29:40.212027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94866 ] 00:26:51.122 [2024-04-26 21:29:40.336882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.380 [2024-04-26 21:29:40.389633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.948 21:29:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:51.948 21:29:41 -- common/autotest_common.sh@850 -- # return 0 00:26:51.948 21:29:41 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.TfUETzv55G 00:26:52.207 21:29:41 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:52.481 [2024-04-26 21:29:41.532551] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:52.481 nvme0n1 00:26:52.481 21:29:41 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:52.481 Running I/O for 1 seconds... 00:26:53.863 00:26:53.863 Latency(us) 00:26:53.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:53.863 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:53.863 Verification LBA range: start 0x0 length 0x2000 00:26:53.863 nvme0n1 : 1.01 4916.89 19.21 0.00 0.00 25806.90 5866.76 23009.15 00:26:53.863 =================================================================================================================== 00:26:53.863 Total : 4916.89 19.21 0.00 0.00 25806.90 5866.76 23009.15 00:26:53.863 0 00:26:53.863 21:29:42 -- target/tls.sh@263 -- # rpc_cmd save_config 00:26:53.863 21:29:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.863 21:29:42 -- common/autotest_common.sh@10 -- # set +x 00:26:53.863 21:29:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.863 21:29:42 -- target/tls.sh@263 -- # tgtcfg='{ 00:26:53.863 "subsystems": [ 00:26:53.863 { 00:26:53.863 "subsystem": "keyring", 00:26:53.863 "config": [ 00:26:53.863 { 00:26:53.863 "method": "keyring_file_add_key", 00:26:53.863 "params": { 00:26:53.863 "name": "key0", 00:26:53.863 "path": "/tmp/tmp.TfUETzv55G" 00:26:53.863 } 00:26:53.863 } 00:26:53.863 ] 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "subsystem": "iobuf", 00:26:53.863 "config": [ 00:26:53.863 { 00:26:53.863 "method": "iobuf_set_options", 00:26:53.863 "params": { 00:26:53.863 "large_bufsize": 135168, 00:26:53.863 "large_pool_count": 1024, 00:26:53.863 "small_bufsize": 8192, 00:26:53.863 "small_pool_count": 8192 00:26:53.863 } 00:26:53.863 } 00:26:53.863 ] 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "subsystem": "sock", 00:26:53.863 "config": [ 00:26:53.863 { 00:26:53.863 "method": "sock_impl_set_options", 00:26:53.863 "params": { 00:26:53.863 "enable_ktls": false, 00:26:53.863 "enable_placement_id": 0, 00:26:53.863 "enable_quickack": false, 00:26:53.863 "enable_recv_pipe": true, 00:26:53.863 "enable_zerocopy_send_client": false, 00:26:53.863 "enable_zerocopy_send_server": true, 00:26:53.863 "impl_name": "posix", 00:26:53.863 "recv_buf_size": 2097152, 00:26:53.863 "send_buf_size": 2097152, 
00:26:53.863 "tls_version": 0, 00:26:53.863 "zerocopy_threshold": 0 00:26:53.863 } 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "method": "sock_impl_set_options", 00:26:53.863 "params": { 00:26:53.863 "enable_ktls": false, 00:26:53.863 "enable_placement_id": 0, 00:26:53.863 "enable_quickack": false, 00:26:53.863 "enable_recv_pipe": true, 00:26:53.863 "enable_zerocopy_send_client": false, 00:26:53.863 "enable_zerocopy_send_server": true, 00:26:53.863 "impl_name": "ssl", 00:26:53.863 "recv_buf_size": 4096, 00:26:53.863 "send_buf_size": 4096, 00:26:53.863 "tls_version": 0, 00:26:53.863 "zerocopy_threshold": 0 00:26:53.863 } 00:26:53.863 } 00:26:53.863 ] 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "subsystem": "vmd", 00:26:53.863 "config": [] 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "subsystem": "accel", 00:26:53.863 "config": [ 00:26:53.863 { 00:26:53.863 "method": "accel_set_options", 00:26:53.863 "params": { 00:26:53.863 "buf_count": 2048, 00:26:53.863 "large_cache_size": 16, 00:26:53.863 "sequence_count": 2048, 00:26:53.863 "small_cache_size": 128, 00:26:53.863 "task_count": 2048 00:26:53.863 } 00:26:53.863 } 00:26:53.863 ] 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "subsystem": "bdev", 00:26:53.863 "config": [ 00:26:53.863 { 00:26:53.863 "method": "bdev_set_options", 00:26:53.863 "params": { 00:26:53.863 "bdev_auto_examine": true, 00:26:53.863 "bdev_io_cache_size": 256, 00:26:53.863 "bdev_io_pool_size": 65535, 00:26:53.863 "iobuf_large_cache_size": 16, 00:26:53.863 "iobuf_small_cache_size": 128 00:26:53.863 } 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "method": "bdev_raid_set_options", 00:26:53.863 "params": { 00:26:53.863 "process_window_size_kb": 1024 00:26:53.863 } 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "method": "bdev_iscsi_set_options", 00:26:53.863 "params": { 00:26:53.863 "timeout_sec": 30 00:26:53.863 } 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "method": "bdev_nvme_set_options", 00:26:53.863 "params": { 00:26:53.863 "action_on_timeout": "none", 00:26:53.863 "allow_accel_sequence": false, 00:26:53.863 "arbitration_burst": 0, 00:26:53.863 "bdev_retry_count": 3, 00:26:53.863 "ctrlr_loss_timeout_sec": 0, 00:26:53.863 "delay_cmd_submit": true, 00:26:53.863 "dhchap_dhgroups": [ 00:26:53.863 "null", 00:26:53.863 "ffdhe2048", 00:26:53.863 "ffdhe3072", 00:26:53.863 "ffdhe4096", 00:26:53.863 "ffdhe6144", 00:26:53.863 "ffdhe8192" 00:26:53.863 ], 00:26:53.863 "dhchap_digests": [ 00:26:53.863 "sha256", 00:26:53.863 "sha384", 00:26:53.863 "sha512" 00:26:53.863 ], 00:26:53.863 "disable_auto_failback": false, 00:26:53.863 "fast_io_fail_timeout_sec": 0, 00:26:53.863 "generate_uuids": false, 00:26:53.863 "high_priority_weight": 0, 00:26:53.863 "io_path_stat": false, 00:26:53.863 "io_queue_requests": 0, 00:26:53.863 "keep_alive_timeout_ms": 10000, 00:26:53.863 "low_priority_weight": 0, 00:26:53.863 "medium_priority_weight": 0, 00:26:53.863 "nvme_adminq_poll_period_us": 10000, 00:26:53.863 "nvme_error_stat": false, 00:26:53.863 "nvme_ioq_poll_period_us": 0, 00:26:53.863 "rdma_cm_event_timeout_ms": 0, 00:26:53.863 "rdma_max_cq_size": 0, 00:26:53.863 "rdma_srq_size": 0, 00:26:53.863 "reconnect_delay_sec": 0, 00:26:53.863 "timeout_admin_us": 0, 00:26:53.863 "timeout_us": 0, 00:26:53.863 "transport_ack_timeout": 0, 00:26:53.863 "transport_retry_count": 4, 00:26:53.863 "transport_tos": 0 00:26:53.863 } 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "method": "bdev_nvme_set_hotplug", 00:26:53.863 "params": { 00:26:53.863 "enable": false, 00:26:53.863 "period_us": 100000 00:26:53.863 } 00:26:53.863 
}, 00:26:53.863 { 00:26:53.863 "method": "bdev_malloc_create", 00:26:53.863 "params": { 00:26:53.863 "block_size": 4096, 00:26:53.863 "name": "malloc0", 00:26:53.863 "num_blocks": 8192, 00:26:53.863 "optimal_io_boundary": 0, 00:26:53.863 "physical_block_size": 4096, 00:26:53.863 "uuid": "e0665a9d-a930-4c18-8a4d-5ea3e74b6154" 00:26:53.863 } 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "method": "bdev_wait_for_examine" 00:26:53.863 } 00:26:53.863 ] 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "subsystem": "nbd", 00:26:53.863 "config": [] 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "subsystem": "scheduler", 00:26:53.863 "config": [ 00:26:53.863 { 00:26:53.863 "method": "framework_set_scheduler", 00:26:53.863 "params": { 00:26:53.863 "name": "static" 00:26:53.863 } 00:26:53.863 } 00:26:53.863 ] 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "subsystem": "nvmf", 00:26:53.863 "config": [ 00:26:53.863 { 00:26:53.863 "method": "nvmf_set_config", 00:26:53.863 "params": { 00:26:53.863 "admin_cmd_passthru": { 00:26:53.863 "identify_ctrlr": false 00:26:53.863 }, 00:26:53.863 "discovery_filter": "match_any" 00:26:53.863 } 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "method": "nvmf_set_max_subsystems", 00:26:53.863 "params": { 00:26:53.863 "max_subsystems": 1024 00:26:53.863 } 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "method": "nvmf_set_crdt", 00:26:53.863 "params": { 00:26:53.863 "crdt1": 0, 00:26:53.863 "crdt2": 0, 00:26:53.863 "crdt3": 0 00:26:53.863 } 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "method": "nvmf_create_transport", 00:26:53.863 "params": { 00:26:53.863 "abort_timeout_sec": 1, 00:26:53.863 "ack_timeout": 0, 00:26:53.863 "buf_cache_size": 4294967295, 00:26:53.863 "c2h_success": false, 00:26:53.863 "data_wr_pool_size": 0, 00:26:53.863 "dif_insert_or_strip": false, 00:26:53.863 "in_capsule_data_size": 4096, 00:26:53.863 "io_unit_size": 131072, 00:26:53.863 "max_aq_depth": 128, 00:26:53.863 "max_io_qpairs_per_ctrlr": 127, 00:26:53.863 "max_io_size": 131072, 00:26:53.863 "max_queue_depth": 128, 00:26:53.863 "num_shared_buffers": 511, 00:26:53.863 "sock_priority": 0, 00:26:53.863 "trtype": "TCP", 00:26:53.863 "zcopy": false 00:26:53.863 } 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "method": "nvmf_create_subsystem", 00:26:53.863 "params": { 00:26:53.863 "allow_any_host": false, 00:26:53.863 "ana_reporting": false, 00:26:53.863 "max_cntlid": 65519, 00:26:53.863 "max_namespaces": 32, 00:26:53.863 "min_cntlid": 1, 00:26:53.863 "model_number": "SPDK bdev Controller", 00:26:53.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.863 "serial_number": "00000000000000000000" 00:26:53.863 } 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "method": "nvmf_subsystem_add_host", 00:26:53.863 "params": { 00:26:53.863 "host": "nqn.2016-06.io.spdk:host1", 00:26:53.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.863 "psk": "key0" 00:26:53.863 } 00:26:53.863 }, 00:26:53.863 { 00:26:53.863 "method": "nvmf_subsystem_add_ns", 00:26:53.864 "params": { 00:26:53.864 "namespace": { 00:26:53.864 "bdev_name": "malloc0", 00:26:53.864 "nguid": "E0665A9DA9304C188A4D5EA3E74B6154", 00:26:53.864 "no_auto_visible": false, 00:26:53.864 "nsid": 1, 00:26:53.864 "uuid": "e0665a9d-a930-4c18-8a4d-5ea3e74b6154" 00:26:53.864 }, 00:26:53.864 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:26:53.864 } 00:26:53.864 }, 00:26:53.864 { 00:26:53.864 "method": "nvmf_subsystem_add_listener", 00:26:53.864 "params": { 00:26:53.864 "listen_address": { 00:26:53.864 "adrfam": "IPv4", 00:26:53.864 "traddr": "10.0.0.2", 00:26:53.864 "trsvcid": "4420", 00:26:53.864 
"trtype": "TCP" 00:26:53.864 }, 00:26:53.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:53.864 "secure_channel": true 00:26:53.864 } 00:26:53.864 } 00:26:53.864 ] 00:26:53.864 } 00:26:53.864 ] 00:26:53.864 }' 00:26:53.864 21:29:42 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:54.123 21:29:43 -- target/tls.sh@264 -- # bperfcfg='{ 00:26:54.123 "subsystems": [ 00:26:54.123 { 00:26:54.123 "subsystem": "keyring", 00:26:54.123 "config": [ 00:26:54.123 { 00:26:54.123 "method": "keyring_file_add_key", 00:26:54.123 "params": { 00:26:54.123 "name": "key0", 00:26:54.123 "path": "/tmp/tmp.TfUETzv55G" 00:26:54.123 } 00:26:54.123 } 00:26:54.123 ] 00:26:54.123 }, 00:26:54.123 { 00:26:54.123 "subsystem": "iobuf", 00:26:54.123 "config": [ 00:26:54.123 { 00:26:54.123 "method": "iobuf_set_options", 00:26:54.123 "params": { 00:26:54.123 "large_bufsize": 135168, 00:26:54.123 "large_pool_count": 1024, 00:26:54.123 "small_bufsize": 8192, 00:26:54.123 "small_pool_count": 8192 00:26:54.123 } 00:26:54.123 } 00:26:54.123 ] 00:26:54.123 }, 00:26:54.123 { 00:26:54.123 "subsystem": "sock", 00:26:54.123 "config": [ 00:26:54.123 { 00:26:54.123 "method": "sock_impl_set_options", 00:26:54.123 "params": { 00:26:54.123 "enable_ktls": false, 00:26:54.123 "enable_placement_id": 0, 00:26:54.123 "enable_quickack": false, 00:26:54.123 "enable_recv_pipe": true, 00:26:54.123 "enable_zerocopy_send_client": false, 00:26:54.123 "enable_zerocopy_send_server": true, 00:26:54.123 "impl_name": "posix", 00:26:54.123 "recv_buf_size": 2097152, 00:26:54.123 "send_buf_size": 2097152, 00:26:54.123 "tls_version": 0, 00:26:54.123 "zerocopy_threshold": 0 00:26:54.123 } 00:26:54.123 }, 00:26:54.123 { 00:26:54.123 "method": "sock_impl_set_options", 00:26:54.123 "params": { 00:26:54.123 "enable_ktls": false, 00:26:54.123 "enable_placement_id": 0, 00:26:54.123 "enable_quickack": false, 00:26:54.123 "enable_recv_pipe": true, 00:26:54.123 "enable_zerocopy_send_client": false, 00:26:54.123 "enable_zerocopy_send_server": true, 00:26:54.123 "impl_name": "ssl", 00:26:54.123 "recv_buf_size": 4096, 00:26:54.123 "send_buf_size": 4096, 00:26:54.123 "tls_version": 0, 00:26:54.123 "zerocopy_threshold": 0 00:26:54.123 } 00:26:54.123 } 00:26:54.123 ] 00:26:54.123 }, 00:26:54.123 { 00:26:54.123 "subsystem": "vmd", 00:26:54.123 "config": [] 00:26:54.123 }, 00:26:54.123 { 00:26:54.123 "subsystem": "accel", 00:26:54.123 "config": [ 00:26:54.123 { 00:26:54.123 "method": "accel_set_options", 00:26:54.123 "params": { 00:26:54.123 "buf_count": 2048, 00:26:54.123 "large_cache_size": 16, 00:26:54.123 "sequence_count": 2048, 00:26:54.123 "small_cache_size": 128, 00:26:54.123 "task_count": 2048 00:26:54.123 } 00:26:54.123 } 00:26:54.123 ] 00:26:54.123 }, 00:26:54.123 { 00:26:54.123 "subsystem": "bdev", 00:26:54.123 "config": [ 00:26:54.123 { 00:26:54.123 "method": "bdev_set_options", 00:26:54.123 "params": { 00:26:54.123 "bdev_auto_examine": true, 00:26:54.123 "bdev_io_cache_size": 256, 00:26:54.123 "bdev_io_pool_size": 65535, 00:26:54.123 "iobuf_large_cache_size": 16, 00:26:54.123 "iobuf_small_cache_size": 128 00:26:54.123 } 00:26:54.123 }, 00:26:54.123 { 00:26:54.123 "method": "bdev_raid_set_options", 00:26:54.123 "params": { 00:26:54.123 "process_window_size_kb": 1024 00:26:54.123 } 00:26:54.123 }, 00:26:54.123 { 00:26:54.123 "method": "bdev_iscsi_set_options", 00:26:54.123 "params": { 00:26:54.123 "timeout_sec": 30 00:26:54.123 } 00:26:54.123 }, 00:26:54.123 { 00:26:54.123 "method": 
"bdev_nvme_set_options", 00:26:54.123 "params": { 00:26:54.123 "action_on_timeout": "none", 00:26:54.123 "allow_accel_sequence": false, 00:26:54.123 "arbitration_burst": 0, 00:26:54.123 "bdev_retry_count": 3, 00:26:54.123 "ctrlr_loss_timeout_sec": 0, 00:26:54.123 "delay_cmd_submit": true, 00:26:54.123 "dhchap_dhgroups": [ 00:26:54.123 "null", 00:26:54.123 "ffdhe2048", 00:26:54.123 "ffdhe3072", 00:26:54.123 "ffdhe4096", 00:26:54.123 "ffdhe6144", 00:26:54.123 "ffdhe8192" 00:26:54.123 ], 00:26:54.123 "dhchap_digests": [ 00:26:54.123 "sha256", 00:26:54.123 "sha384", 00:26:54.123 "sha512" 00:26:54.123 ], 00:26:54.123 "disable_auto_failback": false, 00:26:54.123 "fast_io_fail_timeout_sec": 0, 00:26:54.124 "generate_uuids": false, 00:26:54.124 "high_priority_weight": 0, 00:26:54.124 "io_path_stat": false, 00:26:54.124 "io_queue_requests": 512, 00:26:54.124 "keep_alive_timeout_ms": 10000, 00:26:54.124 "low_priority_weight": 0, 00:26:54.124 "medium_priority_weight": 0, 00:26:54.124 "nvme_adminq_poll_period_us": 10000, 00:26:54.124 "nvme_error_stat": false, 00:26:54.124 "nvme_ioq_poll_period_us": 0, 00:26:54.124 "rdma_cm_event_timeout_ms": 0, 00:26:54.124 "rdma_max_cq_size": 0, 00:26:54.124 "rdma_srq_size": 0, 00:26:54.124 "reconnect_delay_sec": 0, 00:26:54.124 "timeout_admin_us": 0, 00:26:54.124 "timeout_us": 0, 00:26:54.124 "transport_ack_timeout": 0, 00:26:54.124 "transport_retry_count": 4, 00:26:54.124 "transport_tos": 0 00:26:54.124 } 00:26:54.124 }, 00:26:54.124 { 00:26:54.124 "method": "bdev_nvme_attach_controller", 00:26:54.124 "params": { 00:26:54.124 "adrfam": "IPv4", 00:26:54.124 "ctrlr_loss_timeout_sec": 0, 00:26:54.124 "ddgst": false, 00:26:54.124 "fast_io_fail_timeout_sec": 0, 00:26:54.124 "hdgst": false, 00:26:54.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:54.124 "name": "nvme0", 00:26:54.124 "prchk_guard": false, 00:26:54.124 "prchk_reftag": false, 00:26:54.124 "psk": "key0", 00:26:54.124 "reconnect_delay_sec": 0, 00:26:54.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.124 "traddr": "10.0.0.2", 00:26:54.124 "trsvcid": "4420", 00:26:54.124 "trtype": "TCP" 00:26:54.124 } 00:26:54.124 }, 00:26:54.124 { 00:26:54.124 "method": "bdev_nvme_set_hotplug", 00:26:54.124 "params": { 00:26:54.124 "enable": false, 00:26:54.124 "period_us": 100000 00:26:54.124 } 00:26:54.124 }, 00:26:54.124 { 00:26:54.124 "method": "bdev_enable_histogram", 00:26:54.124 "params": { 00:26:54.124 "enable": true, 00:26:54.124 "name": "nvme0n1" 00:26:54.124 } 00:26:54.124 }, 00:26:54.124 { 00:26:54.124 "method": "bdev_wait_for_examine" 00:26:54.124 } 00:26:54.124 ] 00:26:54.124 }, 00:26:54.124 { 00:26:54.124 "subsystem": "nbd", 00:26:54.124 "config": [] 00:26:54.124 } 00:26:54.124 ] 00:26:54.124 }' 00:26:54.124 21:29:43 -- target/tls.sh@266 -- # killprocess 94866 00:26:54.124 21:29:43 -- common/autotest_common.sh@936 -- # '[' -z 94866 ']' 00:26:54.124 21:29:43 -- common/autotest_common.sh@940 -- # kill -0 94866 00:26:54.124 21:29:43 -- common/autotest_common.sh@941 -- # uname 00:26:54.124 21:29:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:54.124 21:29:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94866 00:26:54.124 21:29:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:54.124 21:29:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:54.124 killing process with pid 94866 00:26:54.124 21:29:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94866' 00:26:54.124 21:29:43 -- common/autotest_common.sh@955 -- # 
kill 94866 00:26:54.124 Received shutdown signal, test time was about 1.000000 seconds 00:26:54.124 00:26:54.124 Latency(us) 00:26:54.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:54.124 =================================================================================================================== 00:26:54.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:54.124 21:29:43 -- common/autotest_common.sh@960 -- # wait 94866 00:26:54.384 21:29:43 -- target/tls.sh@267 -- # killprocess 94816 00:26:54.384 21:29:43 -- common/autotest_common.sh@936 -- # '[' -z 94816 ']' 00:26:54.384 21:29:43 -- common/autotest_common.sh@940 -- # kill -0 94816 00:26:54.384 21:29:43 -- common/autotest_common.sh@941 -- # uname 00:26:54.384 21:29:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:54.384 21:29:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94816 00:26:54.384 killing process with pid 94816 00:26:54.384 21:29:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:54.384 21:29:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:54.384 21:29:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94816' 00:26:54.384 21:29:43 -- common/autotest_common.sh@955 -- # kill 94816 00:26:54.384 21:29:43 -- common/autotest_common.sh@960 -- # wait 94816 00:26:54.384 21:29:43 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:26:54.384 21:29:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:54.384 21:29:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:54.384 21:29:43 -- common/autotest_common.sh@10 -- # set +x 00:26:54.384 21:29:43 -- target/tls.sh@269 -- # echo '{ 00:26:54.384 "subsystems": [ 00:26:54.384 { 00:26:54.384 "subsystem": "keyring", 00:26:54.384 "config": [ 00:26:54.384 { 00:26:54.384 "method": "keyring_file_add_key", 00:26:54.384 "params": { 00:26:54.384 "name": "key0", 00:26:54.384 "path": "/tmp/tmp.TfUETzv55G" 00:26:54.384 } 00:26:54.384 } 00:26:54.384 ] 00:26:54.384 }, 00:26:54.384 { 00:26:54.384 "subsystem": "iobuf", 00:26:54.384 "config": [ 00:26:54.384 { 00:26:54.384 "method": "iobuf_set_options", 00:26:54.384 "params": { 00:26:54.384 "large_bufsize": 135168, 00:26:54.385 "large_pool_count": 1024, 00:26:54.385 "small_bufsize": 8192, 00:26:54.385 "small_pool_count": 8192 00:26:54.385 } 00:26:54.385 } 00:26:54.385 ] 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "subsystem": "sock", 00:26:54.385 "config": [ 00:26:54.385 { 00:26:54.385 "method": "sock_impl_set_options", 00:26:54.385 "params": { 00:26:54.385 "enable_ktls": false, 00:26:54.385 "enable_placement_id": 0, 00:26:54.385 "enable_quickack": false, 00:26:54.385 "enable_recv_pipe": true, 00:26:54.385 "enable_zerocopy_send_client": false, 00:26:54.385 "enable_zerocopy_send_server": true, 00:26:54.385 "impl_name": "posix", 00:26:54.385 "recv_buf_size": 2097152, 00:26:54.385 "send_buf_size": 2097152, 00:26:54.385 "tls_version": 0, 00:26:54.385 "zerocopy_threshold": 0 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "sock_impl_set_options", 00:26:54.385 "params": { 00:26:54.385 "enable_ktls": false, 00:26:54.385 "enable_placement_id": 0, 00:26:54.385 "enable_quickack": false, 00:26:54.385 "enable_recv_pipe": true, 00:26:54.385 "enable_zerocopy_send_client": false, 00:26:54.385 "enable_zerocopy_send_server": true, 00:26:54.385 "impl_name": "ssl", 00:26:54.385 "recv_buf_size": 4096, 00:26:54.385 "send_buf_size": 4096, 00:26:54.385 "tls_version": 0, 00:26:54.385 
"zerocopy_threshold": 0 00:26:54.385 } 00:26:54.385 } 00:26:54.385 ] 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "subsystem": "vmd", 00:26:54.385 "config": [] 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "subsystem": "accel", 00:26:54.385 "config": [ 00:26:54.385 { 00:26:54.385 "method": "accel_set_options", 00:26:54.385 "params": { 00:26:54.385 "buf_count": 2048, 00:26:54.385 "large_cache_size": 16, 00:26:54.385 "sequence_count": 2048, 00:26:54.385 "small_cache_size": 128, 00:26:54.385 "task_count": 2048 00:26:54.385 } 00:26:54.385 } 00:26:54.385 ] 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "subsystem": "bdev", 00:26:54.385 "config": [ 00:26:54.385 { 00:26:54.385 "method": "bdev_set_options", 00:26:54.385 "params": { 00:26:54.385 "bdev_auto_examine": true, 00:26:54.385 "bdev_io_cache_size": 256, 00:26:54.385 "bdev_io_pool_size": 65535, 00:26:54.385 "iobuf_large_cache_size": 16, 00:26:54.385 "iobuf_small_cache_size": 128 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "bdev_raid_set_options", 00:26:54.385 "params": { 00:26:54.385 "process_window_size_kb": 1024 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "bdev_iscsi_set_options", 00:26:54.385 "params": { 00:26:54.385 "timeout_sec": 30 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "bdev_nvme_set_options", 00:26:54.385 "params": { 00:26:54.385 "action_on_timeout": "none", 00:26:54.385 "allow_accel_sequence": false, 00:26:54.385 "arbitration_burst": 0, 00:26:54.385 "bdev_retry_count": 3, 00:26:54.385 "ctrlr_loss_timeout_sec": 0, 00:26:54.385 "delay_cmd_submit": true, 00:26:54.385 "dhchap_dhgroups": [ 00:26:54.385 "null", 00:26:54.385 "ffdhe2048", 00:26:54.385 "ffdhe3072", 00:26:54.385 "ffdhe4096", 00:26:54.385 "ffdhe6144", 00:26:54.385 "ffdhe8192" 00:26:54.385 ], 00:26:54.385 "dhchap_digests": [ 00:26:54.385 "sha256", 00:26:54.385 "sha384", 00:26:54.385 "sha512" 00:26:54.385 ], 00:26:54.385 "disable_auto_failback": false, 00:26:54.385 "fast_io_fail_timeout_sec": 0, 00:26:54.385 "generate_uuids": false, 00:26:54.385 "high_priority_weight": 0, 00:26:54.385 "io_path_stat": false, 00:26:54.385 "io_queue_requests": 0, 00:26:54.385 "keep_alive_timeout_ms": 10000, 00:26:54.385 "low_priority_weight": 0, 00:26:54.385 "medium_priority_weight": 0, 00:26:54.385 "nvme_adminq_poll_period_us": 10000, 00:26:54.385 "nvme_error_stat": false, 00:26:54.385 "nvme_ioq_poll_period_us": 0, 00:26:54.385 "rdma_cm_event_timeout_ms": 0, 00:26:54.385 "rdma_max_cq_size": 0, 00:26:54.385 "rdma_srq_size": 0, 00:26:54.385 "reconnect_delay_sec": 0, 00:26:54.385 "timeout_admin_us": 0, 00:26:54.385 "timeout_us": 0, 00:26:54.385 "transport_ack_timeout": 0, 00:26:54.385 "transport_retry_count": 4, 00:26:54.385 "transport_tos": 0 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "bdev_nvme_set_hotplug", 00:26:54.385 "params": { 00:26:54.385 "enable": false, 00:26:54.385 "period_us": 100000 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "bdev_malloc_create", 00:26:54.385 "params": { 00:26:54.385 "block_size": 4096, 00:26:54.385 "name": "malloc0", 00:26:54.385 "num_blocks": 8192, 00:26:54.385 "optimal_io_boundary": 0, 00:26:54.385 "physical_block_size": 4096, 00:26:54.385 "uuid": "e0665a9d-a930-4c18-8a4d-5ea3e74b6154" 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "bdev_wait_for_examine" 00:26:54.385 } 00:26:54.385 ] 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "subsystem": "nbd", 00:26:54.385 "config": [] 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 
"subsystem": "scheduler", 00:26:54.385 "config": [ 00:26:54.385 { 00:26:54.385 "method": "framework_set_scheduler", 00:26:54.385 "params": { 00:26:54.385 "name": "static" 00:26:54.385 } 00:26:54.385 } 00:26:54.385 ] 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "subsystem": "nvmf", 00:26:54.385 "config": [ 00:26:54.385 { 00:26:54.385 "method": "nvmf_set_config", 00:26:54.385 "params": { 00:26:54.385 "admin_cmd_passthru": { 00:26:54.385 "identify_ctrlr": false 00:26:54.385 }, 00:26:54.385 "discovery_filter": "match_any" 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "nvmf_set_max_subsystems", 00:26:54.385 "params": { 00:26:54.385 "max_subsystems": 1024 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "nvmf_set_crdt", 00:26:54.385 "params": { 00:26:54.385 "crdt1": 0, 00:26:54.385 "crdt2": 0, 00:26:54.385 "crdt3": 0 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "nvmf_create_transport", 00:26:54.385 "params": { 00:26:54.385 "abort_timeout_sec": 1, 00:26:54.385 "ack_timeout": 0, 00:26:54.385 "buf_cache_size": 4294967295, 00:26:54.385 "c2h_success": false, 00:26:54.385 "data_wr_pool_size": 0, 00:26:54.385 "dif_insert_or_strip": false, 00:26:54.385 "in_capsule_data_size": 4096, 00:26:54.385 "io_unit_size": 131072, 00:26:54.385 "max_aq_depth": 128, 00:26:54.385 "max_io_qpairs_per_ctrlr": 127, 00:26:54.385 "max_io_size": 131072, 00:26:54.385 "max_queue_depth": 128, 00:26:54.385 "num_shared_buffers": 511, 00:26:54.385 "sock_priority": 0, 00:26:54.385 "trtype": "TCP", 00:26:54.385 "zcopy": false 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "nvmf_create_subsystem", 00:26:54.385 "params": { 00:26:54.385 "allow_any_host": false, 00:26:54.385 "ana_reporting": false, 00:26:54.385 "max_cntlid": 65519, 00:26:54.385 "max_namespaces": 32, 00:26:54.385 "min_cntlid": 1, 00:26:54.385 "model_number": "SPDK bdev Controller", 00:26:54.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.385 "serial_number": "00000000000000000000" 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "nvmf_subsystem_add_host", 00:26:54.385 "params": { 00:26:54.385 "host": "nqn.2016-06.io.spdk:host1", 00:26:54.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.385 "psk": "key0" 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "nvmf_subsystem_add_ns", 00:26:54.385 "params": { 00:26:54.385 "namespace": { 00:26:54.385 "bdev_name": "malloc0", 00:26:54.385 "nguid": "E0665A9DA9304C188A4D5EA3E74B6154", 00:26:54.385 "no_auto_visible": false, 00:26:54.385 "nsid": 1, 00:26:54.385 "uuid": "e0665a9d-a930-4c18-8a4d-5ea3e74b6154" 00:26:54.385 }, 00:26:54.385 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:26:54.385 } 00:26:54.385 }, 00:26:54.385 { 00:26:54.385 "method": "nvmf_subsystem_add_listener", 00:26:54.385 "params": { 00:26:54.385 "listen_address": { 00:26:54.385 "adrfam": "IPv4", 00:26:54.385 "traddr": "10.0.0.2", 00:26:54.385 "trsvcid": "4420", 00:26:54.385 "trtype": "TCP" 00:26:54.385 }, 00:26:54.385 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:54.385 "secure_channel": true 00:26:54.385 } 00:26:54.385 } 00:26:54.385 ] 00:26:54.385 } 00:26:54.385 ] 00:26:54.385 }' 00:26:54.385 21:29:43 -- nvmf/common.sh@470 -- # nvmfpid=94951 00:26:54.385 21:29:43 -- nvmf/common.sh@471 -- # waitforlisten 94951 00:26:54.385 21:29:43 -- common/autotest_common.sh@817 -- # '[' -z 94951 ']' 00:26:54.385 21:29:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.385 21:29:43 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:26:54.385 21:29:43 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:26:54.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.386 21:29:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.386 21:29:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:54.386 21:29:43 -- common/autotest_common.sh@10 -- # set +x 00:26:54.645 [2024-04-26 21:29:43.685581] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:54.646 [2024-04-26 21:29:43.685658] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.646 [2024-04-26 21:29:43.813543] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.646 [2024-04-26 21:29:43.868887] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.646 [2024-04-26 21:29:43.868940] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.646 [2024-04-26 21:29:43.868948] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.646 [2024-04-26 21:29:43.868953] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.646 [2024-04-26 21:29:43.868959] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.646 [2024-04-26 21:29:43.869045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.905 [2024-04-26 21:29:44.073644] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.905 [2024-04-26 21:29:44.105505] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:54.905 [2024-04-26 21:29:44.105701] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.474 21:29:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:55.474 21:29:44 -- common/autotest_common.sh@850 -- # return 0 00:26:55.474 21:29:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:55.474 21:29:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:55.474 21:29:44 -- common/autotest_common.sh@10 -- # set +x 00:26:55.474 21:29:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.474 21:29:44 -- target/tls.sh@272 -- # bdevperf_pid=94995 00:26:55.474 21:29:44 -- target/tls.sh@273 -- # waitforlisten 94995 /var/tmp/bdevperf.sock 00:26:55.474 21:29:44 -- common/autotest_common.sh@817 -- # '[' -z 94995 ']' 00:26:55.474 21:29:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:55.474 21:29:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:55.474 21:29:44 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:26:55.474 21:29:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:55.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
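Both restarts in this final phase use the same pattern: the live configuration is captured with save_config and handed back on the command line through a /dev/fd process substitution (the nvmf target gets /dev/fd/62, bdevperf gets /dev/fd/63), so nothing is written to disk. A condensed sketch of the bdevperf side, where the variable name and repo-relative paths are illustrative and the flags match the invocation above:

    # capture the configuration of the previous bdevperf instance as JSON
    bperfcfg=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)

    # start a fresh bdevperf, feeding the saved JSON back in via process substitution (becomes -c /dev/fd/63)
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")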
00:26:55.474 21:29:44 -- target/tls.sh@270 -- # echo '{ 00:26:55.474 "subsystems": [ 00:26:55.474 { 00:26:55.474 "subsystem": "keyring", 00:26:55.474 "config": [ 00:26:55.474 { 00:26:55.474 "method": "keyring_file_add_key", 00:26:55.474 "params": { 00:26:55.474 "name": "key0", 00:26:55.474 "path": "/tmp/tmp.TfUETzv55G" 00:26:55.474 } 00:26:55.474 } 00:26:55.474 ] 00:26:55.474 }, 00:26:55.474 { 00:26:55.474 "subsystem": "iobuf", 00:26:55.474 "config": [ 00:26:55.474 { 00:26:55.474 "method": "iobuf_set_options", 00:26:55.474 "params": { 00:26:55.474 "large_bufsize": 135168, 00:26:55.474 "large_pool_count": 1024, 00:26:55.474 "small_bufsize": 8192, 00:26:55.474 "small_pool_count": 8192 00:26:55.474 } 00:26:55.474 } 00:26:55.474 ] 00:26:55.474 }, 00:26:55.474 { 00:26:55.475 "subsystem": "sock", 00:26:55.475 "config": [ 00:26:55.475 { 00:26:55.475 "method": "sock_impl_set_options", 00:26:55.475 "params": { 00:26:55.475 "enable_ktls": false, 00:26:55.475 "enable_placement_id": 0, 00:26:55.475 "enable_quickack": false, 00:26:55.475 "enable_recv_pipe": true, 00:26:55.475 "enable_zerocopy_send_client": false, 00:26:55.475 "enable_zerocopy_send_server": true, 00:26:55.475 "impl_name": "posix", 00:26:55.475 "recv_buf_size": 2097152, 00:26:55.475 "send_buf_size": 2097152, 00:26:55.475 "tls_version": 0, 00:26:55.475 "zerocopy_threshold": 0 00:26:55.475 } 00:26:55.475 }, 00:26:55.475 { 00:26:55.475 "method": "sock_impl_set_options", 00:26:55.475 "params": { 00:26:55.475 "enable_ktls": false, 00:26:55.475 "enable_placement_id": 0, 00:26:55.475 "enable_quickack": false, 00:26:55.475 "enable_recv_pipe": true, 00:26:55.475 "enable_zerocopy_send_client": false, 00:26:55.475 "enable_zerocopy_send_server": true, 00:26:55.475 "impl_name": "ssl", 00:26:55.475 "recv_buf_size": 4096, 00:26:55.475 "send_buf_size": 4096, 00:26:55.475 "tls_version": 0, 00:26:55.475 "zerocopy_threshold": 0 00:26:55.475 } 00:26:55.475 } 00:26:55.475 ] 00:26:55.475 }, 00:26:55.475 { 00:26:55.475 "subsystem": "vmd", 00:26:55.475 "config": [] 00:26:55.475 }, 00:26:55.475 { 00:26:55.475 "subsystem": "accel", 00:26:55.475 "config": [ 00:26:55.475 { 00:26:55.475 "method": "accel_set_options", 00:26:55.475 "params": { 00:26:55.475 "buf_count": 2048, 00:26:55.475 "large_cache_size": 16, 00:26:55.475 "sequence_count": 2048, 00:26:55.475 "small_cache_size": 128, 00:26:55.475 "task_count": 2048 00:26:55.475 } 00:26:55.475 } 00:26:55.475 ] 00:26:55.475 }, 00:26:55.475 { 00:26:55.475 "subsystem": "bdev", 00:26:55.475 "config": [ 00:26:55.475 { 00:26:55.475 "method": "bdev_set_options", 00:26:55.475 "params": { 00:26:55.475 "bdev_auto_examine": true, 00:26:55.475 "bdev_io_cache_size": 256, 00:26:55.475 "bdev_io_pool_size": 65535, 00:26:55.475 "iobuf_large_cache_size": 16, 00:26:55.475 "iobuf_small_cache_size": 128 00:26:55.475 } 00:26:55.475 }, 00:26:55.475 { 00:26:55.475 "method": "bdev_raid_set_options", 00:26:55.475 "params": { 00:26:55.475 "process_window_size_kb": 1024 00:26:55.475 } 00:26:55.475 }, 00:26:55.475 { 00:26:55.475 "method": "bdev_iscsi_set_options", 00:26:55.475 "params": { 00:26:55.475 "timeout_sec": 30 00:26:55.475 } 00:26:55.475 }, 00:26:55.475 { 00:26:55.475 "method": "bdev_nvme_set_options", 00:26:55.475 "params": { 00:26:55.475 "action_on_timeout": "none", 00:26:55.475 "allow_accel_sequence": false, 00:26:55.475 "arbitration_burst": 0, 00:26:55.475 "bdev_retry_count": 3, 00:26:55.475 "ctrlr_loss_timeout_sec": 0, 00:26:55.475 "delay_cmd_submit": true, 00:26:55.475 "dhchap_dhgroups": [ 00:26:55.475 "null", 00:26:55.475 
"ffdhe2048", 00:26:55.475 "ffdhe3072", 00:26:55.475 "ffdhe4096", 00:26:55.475 "ffdhe6144", 00:26:55.475 "ffdhe8192" 00:26:55.475 ], 00:26:55.475 "dhchap_digests": [ 00:26:55.475 "sha256", 00:26:55.475 "sha384", 00:26:55.475 "sha512" 00:26:55.475 ], 00:26:55.475 "disable_auto_failback": false, 00:26:55.475 "fast_io_fail_timeout_sec": 0, 00:26:55.475 "generate_uuids": false, 00:26:55.475 "high_priority_weight": 0, 00:26:55.475 "io_path_stat": false, 00:26:55.475 "io_queue_requests": 512, 00:26:55.475 "keep_alive_timeout_ms": 10000, 00:26:55.475 "low_priority_weight": 0, 00:26:55.475 "medium_priority_weight": 0, 00:26:55.475 "nvme_adminq_poll_period_us": 10000, 00:26:55.475 "nvme_error_stat": false, 00:26:55.475 "nvme_ioq_poll_period_us": 0, 00:26:55.475 "rdma_cm_event_timeout_ms": 0, 00:26:55.475 "rdma_max_cq_size": 0, 00:26:55.475 "rdma_srq_size": 0, 00:26:55.475 "reconnect_delay_sec": 0, 00:26:55.475 "timeout_admin_us": 0, 00:26:55.475 "timeout_us": 0, 00:26:55.475 "transport_ack_timeout": 0 21:29:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:55.475 , 00:26:55.475 "transport_retry_count": 4, 00:26:55.475 "transport_tos": 0 00:26:55.475 } 00:26:55.475 }, 00:26:55.475 { 00:26:55.475 "method": "bdev_nvme_attach_controller", 00:26:55.475 "params": { 00:26:55.475 "adrfam": "IPv4", 00:26:55.475 "ctrlr_loss_timeout_sec": 0, 00:26:55.475 "ddgst": false, 00:26:55.475 "fast_io_fail_timeout_sec": 0, 00:26:55.475 "hdgst": false, 00:26:55.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:55.475 "name": "nvme0", 00:26:55.475 "prchk_guard": false, 00:26:55.475 "prchk_reftag": false, 00:26:55.475 "psk": "key0", 00:26:55.475 "reconnect_delay_sec": 0, 00:26:55.475 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:55.475 "traddr": "10.0.0.2", 00:26:55.475 "trsvcid": "4420", 00:26:55.475 "trtype": "TCP" 00:26:55.475 } 00:26:55.475 }, 00:26:55.475 { 00:26:55.475 "method": "bdev_nvme_set_hotplug", 00:26:55.475 "params": { 00:26:55.475 "enable": false, 00:26:55.475 "period_us": 100000 00:26:55.475 } 00:26:55.475 }, 00:26:55.475 { 00:26:55.475 "method": "bdev_enable_histogram", 00:26:55.475 "params": { 00:26:55.475 "enable": true, 00:26:55.475 "name": "nvme0n1" 00:26:55.475 } 00:26:55.475 }, 00:26:55.475 { 00:26:55.475 "method": "bdev_wait_for_examine" 00:26:55.475 } 00:26:55.475 ] 00:26:55.475 }, 00:26:55.475 { 00:26:55.475 "subsystem": "nbd", 00:26:55.475 "config": [] 00:26:55.475 } 00:26:55.475 ] 00:26:55.475 }' 00:26:55.475 21:29:44 -- common/autotest_common.sh@10 -- # set +x 00:26:55.475 [2024-04-26 21:29:44.693720] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:26:55.475 [2024-04-26 21:29:44.693787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94995 ] 00:26:55.735 [2024-04-26 21:29:44.826975] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.735 [2024-04-26 21:29:44.879824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.995 [2024-04-26 21:29:45.019875] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:56.565 21:29:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:56.565 21:29:45 -- common/autotest_common.sh@850 -- # return 0 00:26:56.565 21:29:45 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:56.565 21:29:45 -- target/tls.sh@275 -- # jq -r '.[].name' 00:26:56.824 21:29:45 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.824 21:29:45 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:56.824 Running I/O for 1 seconds... 00:26:57.797 00:26:57.797 Latency(us) 00:26:57.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.797 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:57.797 Verification LBA range: start 0x0 length 0x2000 00:26:57.797 nvme0n1 : 1.02 5530.97 21.61 0.00 0.00 22940.97 8242.08 20032.84 00:26:57.797 =================================================================================================================== 00:26:57.797 Total : 5530.97 21.61 0.00 0.00 22940.97 8242.08 20032.84 00:26:57.797 0 00:26:57.797 21:29:47 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:26:57.797 21:29:47 -- target/tls.sh@279 -- # cleanup 00:26:57.797 21:29:47 -- target/tls.sh@15 -- # process_shm --id 0 00:26:57.797 21:29:47 -- common/autotest_common.sh@794 -- # type=--id 00:26:57.797 21:29:47 -- common/autotest_common.sh@795 -- # id=0 00:26:57.797 21:29:47 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:26:57.797 21:29:47 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:57.797 21:29:47 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:26:57.797 21:29:47 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:26:57.797 21:29:47 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:26:57.797 21:29:47 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:57.797 nvmf_trace.0 00:26:58.057 21:29:47 -- common/autotest_common.sh@809 -- # return 0 00:26:58.057 21:29:47 -- target/tls.sh@16 -- # killprocess 94995 00:26:58.057 21:29:47 -- common/autotest_common.sh@936 -- # '[' -z 94995 ']' 00:26:58.057 21:29:47 -- common/autotest_common.sh@940 -- # kill -0 94995 00:26:58.057 21:29:47 -- common/autotest_common.sh@941 -- # uname 00:26:58.057 21:29:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:58.057 21:29:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94995 00:26:58.057 21:29:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:58.057 21:29:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:58.057 21:29:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 
94995' 00:26:58.057 killing process with pid 94995 00:26:58.057 21:29:47 -- common/autotest_common.sh@955 -- # kill 94995 00:26:58.057 Received shutdown signal, test time was about 1.000000 seconds 00:26:58.057 00:26:58.057 Latency(us) 00:26:58.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:58.057 =================================================================================================================== 00:26:58.057 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:58.057 21:29:47 -- common/autotest_common.sh@960 -- # wait 94995 00:26:58.317 21:29:47 -- target/tls.sh@17 -- # nvmftestfini 00:26:58.317 21:29:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:58.317 21:29:47 -- nvmf/common.sh@117 -- # sync 00:26:58.317 21:29:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:58.317 21:29:47 -- nvmf/common.sh@120 -- # set +e 00:26:58.317 21:29:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:58.317 21:29:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:58.317 rmmod nvme_tcp 00:26:58.317 rmmod nvme_fabrics 00:26:58.317 rmmod nvme_keyring 00:26:58.317 21:29:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:58.317 21:29:47 -- nvmf/common.sh@124 -- # set -e 00:26:58.317 21:29:47 -- nvmf/common.sh@125 -- # return 0 00:26:58.317 21:29:47 -- nvmf/common.sh@478 -- # '[' -n 94951 ']' 00:26:58.317 21:29:47 -- nvmf/common.sh@479 -- # killprocess 94951 00:26:58.317 21:29:47 -- common/autotest_common.sh@936 -- # '[' -z 94951 ']' 00:26:58.317 21:29:47 -- common/autotest_common.sh@940 -- # kill -0 94951 00:26:58.317 21:29:47 -- common/autotest_common.sh@941 -- # uname 00:26:58.317 21:29:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:58.317 21:29:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94951 00:26:58.317 21:29:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:58.317 21:29:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:58.317 killing process with pid 94951 00:26:58.317 21:29:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94951' 00:26:58.317 21:29:47 -- common/autotest_common.sh@955 -- # kill 94951 00:26:58.317 21:29:47 -- common/autotest_common.sh@960 -- # wait 94951 00:26:58.575 21:29:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:58.575 21:29:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:58.575 21:29:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:58.575 21:29:47 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:58.575 21:29:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:58.575 21:29:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.575 21:29:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:58.575 21:29:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.575 21:29:47 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:58.575 21:29:47 -- target/tls.sh@18 -- # rm -f /tmp/tmp.wbEnI9rum9 /tmp/tmp.BlL4fm0rlq /tmp/tmp.TfUETzv55G 00:26:58.575 00:26:58.575 real 1m22.630s 00:26:58.575 user 2m10.627s 00:26:58.575 sys 0m26.393s 00:26:58.575 21:29:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:58.575 21:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:58.575 ************************************ 00:26:58.575 END TEST nvmf_tls 00:26:58.575 ************************************ 00:26:58.575 21:29:47 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:58.575 21:29:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:58.575 21:29:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:58.575 21:29:47 -- common/autotest_common.sh@10 -- # set +x 00:26:58.835 ************************************ 00:26:58.835 START TEST nvmf_fips 00:26:58.835 ************************************ 00:26:58.835 21:29:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:26:58.835 * Looking for test storage... 00:26:58.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:26:58.835 21:29:47 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:58.835 21:29:47 -- nvmf/common.sh@7 -- # uname -s 00:26:58.835 21:29:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.835 21:29:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.835 21:29:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.835 21:29:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.835 21:29:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.835 21:29:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.835 21:29:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.835 21:29:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.835 21:29:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.835 21:29:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.835 21:29:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:26:58.835 21:29:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:26:58.835 21:29:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.835 21:29:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.835 21:29:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:58.835 21:29:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.835 21:29:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:58.835 21:29:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.835 21:29:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.835 21:29:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.835 21:29:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.835 21:29:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.835 21:29:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.835 21:29:48 -- paths/export.sh@5 -- # export PATH 00:26:58.835 21:29:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.835 21:29:48 -- nvmf/common.sh@47 -- # : 0 00:26:58.835 21:29:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:58.835 21:29:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:58.835 21:29:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.835 21:29:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.835 21:29:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.835 21:29:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:58.835 21:29:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:58.835 21:29:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:58.835 21:29:48 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:58.835 21:29:48 -- fips/fips.sh@89 -- # check_openssl_version 00:26:58.835 21:29:48 -- fips/fips.sh@83 -- # local target=3.0.0 00:26:58.835 21:29:48 -- fips/fips.sh@85 -- # openssl version 00:26:58.835 21:29:48 -- fips/fips.sh@85 -- # awk '{print $2}' 00:26:58.835 21:29:48 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:26:58.835 21:29:48 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:26:58.835 21:29:48 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:26:58.835 21:29:48 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:26:58.835 21:29:48 -- scripts/common.sh@333 -- # IFS=.-: 00:26:58.835 21:29:48 -- scripts/common.sh@333 -- # read -ra ver1 00:26:58.835 21:29:48 -- scripts/common.sh@334 -- # IFS=.-: 00:26:58.835 21:29:48 -- scripts/common.sh@334 -- # read -ra ver2 00:26:58.835 21:29:48 -- scripts/common.sh@335 -- # local 'op=>=' 00:26:58.835 21:29:48 -- scripts/common.sh@337 -- # ver1_l=3 00:26:58.835 21:29:48 -- scripts/common.sh@338 -- # ver2_l=3 00:26:58.835 21:29:48 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:26:58.835 21:29:48 -- 
scripts/common.sh@341 -- # case "$op" in 00:26:58.835 21:29:48 -- scripts/common.sh@345 -- # : 1 00:26:58.835 21:29:48 -- scripts/common.sh@361 -- # (( v = 0 )) 00:26:58.835 21:29:48 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:58.835 21:29:48 -- scripts/common.sh@362 -- # decimal 3 00:26:58.835 21:29:48 -- scripts/common.sh@350 -- # local d=3 00:26:58.835 21:29:48 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:58.835 21:29:48 -- scripts/common.sh@352 -- # echo 3 00:26:58.835 21:29:48 -- scripts/common.sh@362 -- # ver1[v]=3 00:26:58.835 21:29:48 -- scripts/common.sh@363 -- # decimal 3 00:26:58.835 21:29:48 -- scripts/common.sh@350 -- # local d=3 00:26:58.835 21:29:48 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:26:58.835 21:29:48 -- scripts/common.sh@352 -- # echo 3 00:26:58.835 21:29:48 -- scripts/common.sh@363 -- # ver2[v]=3 00:26:58.835 21:29:48 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:58.835 21:29:48 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:26:58.835 21:29:48 -- scripts/common.sh@361 -- # (( v++ )) 00:26:58.835 21:29:48 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:58.835 21:29:48 -- scripts/common.sh@362 -- # decimal 0 00:26:58.835 21:29:48 -- scripts/common.sh@350 -- # local d=0 00:26:58.835 21:29:48 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:58.835 21:29:48 -- scripts/common.sh@352 -- # echo 0 00:26:58.835 21:29:48 -- scripts/common.sh@362 -- # ver1[v]=0 00:26:58.835 21:29:48 -- scripts/common.sh@363 -- # decimal 0 00:26:58.835 21:29:48 -- scripts/common.sh@350 -- # local d=0 00:26:58.835 21:29:48 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:58.835 21:29:48 -- scripts/common.sh@352 -- # echo 0 00:26:58.835 21:29:48 -- scripts/common.sh@363 -- # ver2[v]=0 00:26:58.835 21:29:48 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:58.835 21:29:48 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:26:58.835 21:29:48 -- scripts/common.sh@361 -- # (( v++ )) 00:26:58.835 21:29:48 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:58.835 21:29:48 -- scripts/common.sh@362 -- # decimal 9 00:26:58.835 21:29:48 -- scripts/common.sh@350 -- # local d=9 00:26:58.835 21:29:48 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:26:58.835 21:29:48 -- scripts/common.sh@352 -- # echo 9 00:26:58.835 21:29:48 -- scripts/common.sh@362 -- # ver1[v]=9 00:26:59.094 21:29:48 -- scripts/common.sh@363 -- # decimal 0 00:26:59.094 21:29:48 -- scripts/common.sh@350 -- # local d=0 00:26:59.094 21:29:48 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:26:59.094 21:29:48 -- scripts/common.sh@352 -- # echo 0 00:26:59.094 21:29:48 -- scripts/common.sh@363 -- # ver2[v]=0 00:26:59.094 21:29:48 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:26:59.094 21:29:48 -- scripts/common.sh@364 -- # return 0 00:26:59.094 21:29:48 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:26:59.094 21:29:48 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:26:59.094 21:29:48 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:26:59.094 21:29:48 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:26:59.094 21:29:48 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:26:59.094 21:29:48 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:26:59.094 21:29:48 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:26:59.094 21:29:48 -- fips/fips.sh@113 -- # build_openssl_config 00:26:59.094 21:29:48 -- fips/fips.sh@37 -- # cat 00:26:59.094 21:29:48 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:26:59.094 21:29:48 -- fips/fips.sh@58 -- # cat - 00:26:59.094 21:29:48 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:26:59.094 21:29:48 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:26:59.094 21:29:48 -- fips/fips.sh@116 -- # mapfile -t providers 00:26:59.094 21:29:48 -- fips/fips.sh@116 -- # grep name 00:26:59.094 21:29:48 -- fips/fips.sh@116 -- # openssl list -providers 00:26:59.094 21:29:48 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:26:59.094 21:29:48 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:26:59.094 21:29:48 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:26:59.094 21:29:48 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:26:59.094 21:29:48 -- fips/fips.sh@127 -- # : 00:26:59.094 21:29:48 -- common/autotest_common.sh@638 -- # local es=0 00:26:59.094 21:29:48 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:26:59.094 21:29:48 -- common/autotest_common.sh@626 -- # local arg=openssl 00:26:59.094 21:29:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:59.094 21:29:48 -- common/autotest_common.sh@630 -- # type -t openssl 00:26:59.094 21:29:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:59.094 21:29:48 -- common/autotest_common.sh@632 -- # type -P openssl 00:26:59.094 21:29:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:59.094 21:29:48 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:26:59.094 21:29:48 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:26:59.094 21:29:48 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:26:59.094 Error setting digest 00:26:59.094 00C2442A8B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:26:59.094 00C2442A8B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:26:59.094 21:29:48 -- common/autotest_common.sh@641 -- # es=1 00:26:59.094 21:29:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:59.094 21:29:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:59.094 21:29:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:59.094 21:29:48 -- fips/fips.sh@130 -- # nvmftestinit 00:26:59.094 21:29:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:59.094 21:29:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.094 21:29:48 -- nvmf/common.sh@437 -- # prepare_net_devs 
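Before touching NVMe at all, fips.sh probes the OpenSSL build: it requires version 3.0.0 or newer, expects a fips provider to be loaded next to the base provider via the generated spdk_fips.conf, and then proves FIPS mode is actually enforced by checking that a non-approved digest is rejected (the "Error setting digest" lines above are the expected outcome, not a failure). A by-hand version of the same probe, assuming OpenSSL 3.x on the test host:

  # Reproduce the fips.sh detection logic by hand (sketch)
  openssl version                        # must report >= 3.0.0
  openssl list -providers | grep name    # expect a fips provider next to the base provider
  # Under enforced FIPS mode MD5 is unavailable, so this command must fail
  openssl md5 <(echo test) && echo 'FIPS mode NOT enforced' || echo 'MD5 rejected, FIPS mode enforced'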
00:26:59.094 21:29:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:59.094 21:29:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:59.094 21:29:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.094 21:29:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:59.094 21:29:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.094 21:29:48 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:59.094 21:29:48 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:59.094 21:29:48 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:59.094 21:29:48 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:59.094 21:29:48 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:59.094 21:29:48 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:59.094 21:29:48 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.094 21:29:48 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.094 21:29:48 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:59.094 21:29:48 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:59.094 21:29:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:59.094 21:29:48 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:59.094 21:29:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:59.094 21:29:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.094 21:29:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:59.094 21:29:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:59.094 21:29:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:59.094 21:29:48 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:59.094 21:29:48 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:59.094 21:29:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:59.094 Cannot find device "nvmf_tgt_br" 00:26:59.094 21:29:48 -- nvmf/common.sh@155 -- # true 00:26:59.094 21:29:48 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:59.094 Cannot find device "nvmf_tgt_br2" 00:26:59.094 21:29:48 -- nvmf/common.sh@156 -- # true 00:26:59.094 21:29:48 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:59.094 21:29:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:59.094 Cannot find device "nvmf_tgt_br" 00:26:59.095 21:29:48 -- nvmf/common.sh@158 -- # true 00:26:59.095 21:29:48 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:59.095 Cannot find device "nvmf_tgt_br2" 00:26:59.095 21:29:48 -- nvmf/common.sh@159 -- # true 00:26:59.095 21:29:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:59.352 21:29:48 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:59.353 21:29:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:59.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:59.353 21:29:48 -- nvmf/common.sh@162 -- # true 00:26:59.353 21:29:48 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:59.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:59.353 21:29:48 -- nvmf/common.sh@163 -- # true 00:26:59.353 21:29:48 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:59.353 21:29:48 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:59.353 21:29:48 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:59.353 21:29:48 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:59.353 21:29:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:59.353 21:29:48 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:59.353 21:29:48 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:59.353 21:29:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:59.353 21:29:48 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:59.353 21:29:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:59.353 21:29:48 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:59.353 21:29:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:59.353 21:29:48 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:59.353 21:29:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:59.353 21:29:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:59.353 21:29:48 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:59.353 21:29:48 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:59.353 21:29:48 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:59.353 21:29:48 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:59.353 21:29:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:59.353 21:29:48 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:59.353 21:29:48 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:59.353 21:29:48 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:59.353 21:29:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:59.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:26:59.353 00:26:59.353 --- 10.0.0.2 ping statistics --- 00:26:59.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.353 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:26:59.353 21:29:48 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:59.353 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:59.353 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:26:59.353 00:26:59.353 --- 10.0.0.3 ping statistics --- 00:26:59.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.353 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:26:59.353 21:29:48 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:59.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:59.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:26:59.353 00:26:59.353 --- 10.0.0.1 ping statistics --- 00:26:59.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.353 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:26:59.353 21:29:48 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.353 21:29:48 -- nvmf/common.sh@422 -- # return 0 00:26:59.353 21:29:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:59.353 21:29:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.353 21:29:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:59.353 21:29:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:59.353 21:29:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.353 21:29:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:59.353 21:29:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:59.353 21:29:48 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:26:59.353 21:29:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:59.353 21:29:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:59.353 21:29:48 -- common/autotest_common.sh@10 -- # set +x 00:26:59.353 21:29:48 -- nvmf/common.sh@470 -- # nvmfpid=95281 00:26:59.353 21:29:48 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:59.611 21:29:48 -- nvmf/common.sh@471 -- # waitforlisten 95281 00:26:59.611 21:29:48 -- common/autotest_common.sh@817 -- # '[' -z 95281 ']' 00:26:59.611 21:29:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.611 21:29:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:59.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.611 21:29:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.611 21:29:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:59.611 21:29:48 -- common/autotest_common.sh@10 -- # set +x 00:26:59.611 [2024-04-26 21:29:48.679678] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:59.611 [2024-04-26 21:29:48.679764] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.611 [2024-04-26 21:29:48.806178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.869 [2024-04-26 21:29:48.878699] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.869 [2024-04-26 21:29:48.878773] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.869 [2024-04-26 21:29:48.878785] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.869 [2024-04-26 21:29:48.878795] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.869 [2024-04-26 21:29:48.878803] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
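The nvmf_veth_init sequence traced above builds the whole test network in software: the initiator interface nvmf_init_if stays in the root namespace at 10.0.0.1, the target interfaces nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and everything is joined through the nvmf_br bridge; the three pings confirm connectivity in both directions. A condensed recreation with a single target interface, using the same commands as the trace:

  # One initiator veth in the root namespace, one target veth in nvmf_tgt_ns_spdk, bridged together
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # root namespace -> target namespace over the bridge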
00:26:59.869 [2024-04-26 21:29:48.878837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.434 21:29:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:00.434 21:29:49 -- common/autotest_common.sh@850 -- # return 0 00:27:00.434 21:29:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:00.434 21:29:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:00.434 21:29:49 -- common/autotest_common.sh@10 -- # set +x 00:27:00.434 21:29:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.434 21:29:49 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:27:00.434 21:29:49 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:27:00.434 21:29:49 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:00.434 21:29:49 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:27:00.434 21:29:49 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:00.434 21:29:49 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:00.434 21:29:49 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:00.434 21:29:49 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:00.693 [2024-04-26 21:29:49.832168] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.693 [2024-04-26 21:29:49.852086] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:00.693 [2024-04-26 21:29:49.852294] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.693 [2024-04-26 21:29:49.881302] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:00.693 malloc0 00:27:00.693 21:29:49 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:00.693 21:29:49 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:00.693 21:29:49 -- fips/fips.sh@147 -- # bdevperf_pid=95339 00:27:00.693 21:29:49 -- fips/fips.sh@148 -- # waitforlisten 95339 /var/tmp/bdevperf.sock 00:27:00.693 21:29:49 -- common/autotest_common.sh@817 -- # '[' -z 95339 ']' 00:27:00.693 21:29:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:00.693 21:29:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:00.693 21:29:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:00.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:00.693 21:29:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:00.693 21:29:49 -- common/autotest_common.sh@10 -- # set +x 00:27:00.951 [2024-04-26 21:29:49.983051] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
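On the target side the FIPS test provisions the TLS PSK as a retained key in the NVMe TLS interchange format (the NVMeTLSkey-1:01:... string above), writes it to key.txt with owner-only permissions, and hands that path to the subsystem configuration before the listener on 10.0.0.2:4420 comes up (hence the deprecated PSK-path warning). The key handling itself boils down to:

  # PSK interchange key from this run; the file must not be world-readable
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"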
00:27:00.951 [2024-04-26 21:29:49.983145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95339 ] 00:27:00.951 [2024-04-26 21:29:50.110442] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.952 [2024-04-26 21:29:50.182319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.887 21:29:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:01.887 21:29:50 -- common/autotest_common.sh@850 -- # return 0 00:27:01.887 21:29:50 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:01.887 [2024-04-26 21:29:51.112472] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:01.887 [2024-04-26 21:29:51.112592] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:02.147 TLSTESTn1 00:27:02.147 21:29:51 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:02.147 Running I/O for 10 seconds... 00:27:12.175 00:27:12.175 Latency(us) 00:27:12.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.175 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:12.175 Verification LBA range: start 0x0 length 0x2000 00:27:12.175 TLSTESTn1 : 10.01 5258.13 20.54 0.00 0.00 24301.14 5466.10 20032.84 00:27:12.175 =================================================================================================================== 00:27:12.175 Total : 5258.13 20.54 0.00 0.00 24301.14 5466.10 20032.84 00:27:12.175 0 00:27:12.175 21:30:01 -- fips/fips.sh@1 -- # cleanup 00:27:12.175 21:30:01 -- fips/fips.sh@15 -- # process_shm --id 0 00:27:12.175 21:30:01 -- common/autotest_common.sh@794 -- # type=--id 00:27:12.175 21:30:01 -- common/autotest_common.sh@795 -- # id=0 00:27:12.175 21:30:01 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:27:12.175 21:30:01 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:12.175 21:30:01 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:27:12.175 21:30:01 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:27:12.175 21:30:01 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:27:12.175 21:30:01 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:12.175 nvmf_trace.0 00:27:12.175 21:30:01 -- common/autotest_common.sh@809 -- # return 0 00:27:12.175 21:30:01 -- fips/fips.sh@16 -- # killprocess 95339 00:27:12.175 21:30:01 -- common/autotest_common.sh@936 -- # '[' -z 95339 ']' 00:27:12.175 21:30:01 -- common/autotest_common.sh@940 -- # kill -0 95339 00:27:12.175 21:30:01 -- common/autotest_common.sh@941 -- # uname 00:27:12.175 21:30:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:12.434 21:30:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95339 00:27:12.434 21:30:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:27:12.434 
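With bdevperf running as the initiator (-m 0x4, RPC socket /var/tmp/bdevperf.sock), the test drives it entirely over RPC: it attaches a TLS-protected controller using the key file and then kicks off the 10-second verify workload whose results appear in the table above. The two commands, as traced (RPC_PY and SOCK are just shorthand here):

  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  # Attach the TLS-protected controller; --psk points at the key file written earlier
  $RPC_PY -s $SOCK bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  # Run the verify workload configured at bdevperf start-up (-q 128 -o 4096 -w verify -t 10)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests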
21:30:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:27:12.434 killing process with pid 95339 00:27:12.434 21:30:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95339' 00:27:12.434 21:30:01 -- common/autotest_common.sh@955 -- # kill 95339 00:27:12.434 Received shutdown signal, test time was about 10.000000 seconds 00:27:12.434 00:27:12.434 Latency(us) 00:27:12.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.434 =================================================================================================================== 00:27:12.434 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:12.434 [2024-04-26 21:30:01.451474] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:12.434 21:30:01 -- common/autotest_common.sh@960 -- # wait 95339 00:27:12.434 21:30:01 -- fips/fips.sh@17 -- # nvmftestfini 00:27:12.434 21:30:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:12.434 21:30:01 -- nvmf/common.sh@117 -- # sync 00:27:12.694 21:30:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:12.694 21:30:01 -- nvmf/common.sh@120 -- # set +e 00:27:12.694 21:30:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:12.694 21:30:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:12.694 rmmod nvme_tcp 00:27:12.694 rmmod nvme_fabrics 00:27:12.694 rmmod nvme_keyring 00:27:12.694 21:30:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:12.694 21:30:01 -- nvmf/common.sh@124 -- # set -e 00:27:12.694 21:30:01 -- nvmf/common.sh@125 -- # return 0 00:27:12.694 21:30:01 -- nvmf/common.sh@478 -- # '[' -n 95281 ']' 00:27:12.694 21:30:01 -- nvmf/common.sh@479 -- # killprocess 95281 00:27:12.694 21:30:01 -- common/autotest_common.sh@936 -- # '[' -z 95281 ']' 00:27:12.694 21:30:01 -- common/autotest_common.sh@940 -- # kill -0 95281 00:27:12.694 21:30:01 -- common/autotest_common.sh@941 -- # uname 00:27:12.694 21:30:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:12.694 21:30:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95281 00:27:12.694 killing process with pid 95281 00:27:12.694 21:30:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:12.694 21:30:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:12.694 21:30:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95281' 00:27:12.694 21:30:01 -- common/autotest_common.sh@955 -- # kill 95281 00:27:12.694 [2024-04-26 21:30:01.819155] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:12.694 21:30:01 -- common/autotest_common.sh@960 -- # wait 95281 00:27:12.960 21:30:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:12.960 21:30:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:12.960 21:30:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:12.960 21:30:02 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:12.960 21:30:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:12.960 21:30:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.960 21:30:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.960 21:30:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.960 21:30:02 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:12.960 21:30:02 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:12.960 00:27:12.960 real 0m14.207s 00:27:12.960 user 0m19.518s 00:27:12.960 sys 0m5.481s 00:27:12.960 21:30:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:12.960 21:30:02 -- common/autotest_common.sh@10 -- # set +x 00:27:12.960 ************************************ 00:27:12.960 END TEST nvmf_fips 00:27:12.960 ************************************ 00:27:12.960 21:30:02 -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:27:12.960 21:30:02 -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:27:12.960 21:30:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:12.960 21:30:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:12.960 21:30:02 -- common/autotest_common.sh@10 -- # set +x 00:27:12.960 ************************************ 00:27:12.960 START TEST nvmf_fuzz 00:27:12.960 ************************************ 00:27:12.960 21:30:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:27:13.220 * Looking for test storage... 00:27:13.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:13.220 21:30:02 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:13.220 21:30:02 -- nvmf/common.sh@7 -- # uname -s 00:27:13.220 21:30:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.220 21:30:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.220 21:30:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.220 21:30:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.220 21:30:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:13.220 21:30:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.220 21:30:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.220 21:30:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.220 21:30:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.220 21:30:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.220 21:30:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:27:13.220 21:30:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:27:13.220 21:30:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.220 21:30:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.220 21:30:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:13.220 21:30:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:13.220 21:30:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:13.220 21:30:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.220 21:30:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.220 21:30:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.220 21:30:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.220 21:30:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.220 21:30:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.220 21:30:02 -- paths/export.sh@5 -- # export PATH 00:27:13.221 21:30:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.221 21:30:02 -- nvmf/common.sh@47 -- # : 0 00:27:13.221 21:30:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:13.221 21:30:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:13.221 21:30:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:13.221 21:30:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.221 21:30:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.221 21:30:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:13.221 21:30:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:13.221 21:30:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:13.221 21:30:02 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:27:13.221 21:30:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:13.221 21:30:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.221 21:30:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:13.221 21:30:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:13.221 21:30:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:13.221 21:30:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.221 21:30:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:13.221 21:30:02 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.221 21:30:02 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:13.221 21:30:02 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:13.221 21:30:02 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:13.221 21:30:02 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:13.221 21:30:02 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:13.221 21:30:02 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:13.221 21:30:02 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.221 21:30:02 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.221 21:30:02 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:13.221 21:30:02 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:13.221 21:30:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:13.221 21:30:02 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:13.221 21:30:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:13.221 21:30:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.221 21:30:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:13.221 21:30:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:13.221 21:30:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:13.221 21:30:02 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:13.221 21:30:02 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:13.221 21:30:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:13.221 Cannot find device "nvmf_tgt_br" 00:27:13.221 21:30:02 -- nvmf/common.sh@155 -- # true 00:27:13.221 21:30:02 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:13.221 Cannot find device "nvmf_tgt_br2" 00:27:13.221 21:30:02 -- nvmf/common.sh@156 -- # true 00:27:13.221 21:30:02 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:13.221 21:30:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:13.221 Cannot find device "nvmf_tgt_br" 00:27:13.221 21:30:02 -- nvmf/common.sh@158 -- # true 00:27:13.221 21:30:02 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:13.221 Cannot find device "nvmf_tgt_br2" 00:27:13.221 21:30:02 -- nvmf/common.sh@159 -- # true 00:27:13.221 21:30:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:13.502 21:30:02 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:13.502 21:30:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:13.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:13.502 21:30:02 -- nvmf/common.sh@162 -- # true 00:27:13.502 21:30:02 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:13.502 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:13.502 21:30:02 -- nvmf/common.sh@163 -- # true 00:27:13.502 21:30:02 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:13.502 21:30:02 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:13.502 21:30:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:13.502 21:30:02 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:13.502 21:30:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:13.502 21:30:02 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:13.502 21:30:02 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:13.502 21:30:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:13.502 21:30:02 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:13.502 21:30:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:13.502 21:30:02 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:13.502 21:30:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:13.502 21:30:02 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:13.502 21:30:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:13.502 21:30:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:13.502 21:30:02 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:13.502 21:30:02 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:13.502 21:30:02 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:13.502 21:30:02 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:13.502 21:30:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:13.502 21:30:02 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:13.502 21:30:02 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:13.502 21:30:02 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:13.502 21:30:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:13.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:27:13.502 00:27:13.502 --- 10.0.0.2 ping statistics --- 00:27:13.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.502 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:27:13.502 21:30:02 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:13.502 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:13.502 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:27:13.502 00:27:13.502 --- 10.0.0.3 ping statistics --- 00:27:13.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.502 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:27:13.503 21:30:02 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:13.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:13.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:27:13.503 00:27:13.503 --- 10.0.0.1 ping statistics --- 00:27:13.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.503 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:27:13.503 21:30:02 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.503 21:30:02 -- nvmf/common.sh@422 -- # return 0 00:27:13.503 21:30:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:13.503 21:30:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.503 21:30:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:13.503 21:30:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:13.503 21:30:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.503 21:30:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:13.503 21:30:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:13.503 21:30:02 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=95687 00:27:13.503 21:30:02 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:13.503 21:30:02 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:13.503 21:30:02 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 95687 00:27:13.503 21:30:02 -- common/autotest_common.sh@817 -- # '[' -z 95687 ']' 00:27:13.503 21:30:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.503 21:30:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:13.503 21:30:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
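For the fuzz run the target application is started inside the test namespace on a single core with all tracepoint groups enabled, and the harness then blocks until the RPC socket is available. Stripped of the framework plumbing, the start-up amounts to something like the following (waitforlisten is the autotest_common.sh helper that polls for /var/tmp/spdk.sock):

  # Start nvmf_tgt inside the namespace: shm id 0, tracepoint mask 0xFFFF, core mask 0x1
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # blocks until the RPC socket /var/tmp/spdk.sock is up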
00:27:13.503 21:30:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:13.503 21:30:02 -- common/autotest_common.sh@10 -- # set +x 00:27:14.497 21:30:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:14.497 21:30:03 -- common/autotest_common.sh@850 -- # return 0 00:27:14.497 21:30:03 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:14.497 21:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.497 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:27:14.498 21:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.498 21:30:03 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:27:14.498 21:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.498 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:27:14.759 Malloc0 00:27:14.759 21:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.759 21:30:03 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:14.759 21:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.759 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:27:14.759 21:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.759 21:30:03 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:14.759 21:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.759 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:27:14.759 21:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.759 21:30:03 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:14.759 21:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:14.759 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:27:14.759 21:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:14.759 21:30:03 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:27:14.759 21:30:03 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:27:15.018 Shutting down the fuzz application 00:27:15.018 21:30:04 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:27:15.277 Shutting down the fuzz application 00:27:15.277 21:30:04 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:15.277 21:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.277 21:30:04 -- common/autotest_common.sh@10 -- # set +x 00:27:15.277 21:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.277 21:30:04 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:15.277 21:30:04 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:27:15.277 21:30:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:15.277 21:30:04 -- nvmf/common.sh@117 -- # sync 00:27:15.277 21:30:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:15.277 21:30:04 -- nvmf/common.sh@120 -- # set +e 00:27:15.277 21:30:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:15.277 
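The fuzz target itself is deliberately small: one TCP transport, one 64 MB malloc bdev with 512-byte blocks, and one subsystem exposing it on 10.0.0.2:4420. nvme_fuzz is then aimed at that TRID twice, first for a 30-second run with a fixed seed (-t 30 -S 123456 -N) and then replaying the bundled example.json (-j). Condensed from the rpc_cmd trace above (rpc_cmd is the test framework's wrapper around scripts/rpc.py, NVME_FUZZ is shorthand for the fuzzer path):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create -b Malloc0 64 512
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  NVME_FUZZ=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
  $NVME_FUZZ -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
  $NVME_FUZZ -m 0x2 -F "$trid" -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a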
21:30:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:15.277 rmmod nvme_tcp 00:27:15.277 rmmod nvme_fabrics 00:27:15.277 rmmod nvme_keyring 00:27:15.536 21:30:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:15.536 21:30:04 -- nvmf/common.sh@124 -- # set -e 00:27:15.536 21:30:04 -- nvmf/common.sh@125 -- # return 0 00:27:15.536 21:30:04 -- nvmf/common.sh@478 -- # '[' -n 95687 ']' 00:27:15.536 21:30:04 -- nvmf/common.sh@479 -- # killprocess 95687 00:27:15.536 21:30:04 -- common/autotest_common.sh@936 -- # '[' -z 95687 ']' 00:27:15.536 21:30:04 -- common/autotest_common.sh@940 -- # kill -0 95687 00:27:15.536 21:30:04 -- common/autotest_common.sh@941 -- # uname 00:27:15.536 21:30:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:15.536 21:30:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95687 00:27:15.536 21:30:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:15.536 21:30:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:15.536 killing process with pid 95687 00:27:15.536 21:30:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95687' 00:27:15.536 21:30:04 -- common/autotest_common.sh@955 -- # kill 95687 00:27:15.536 21:30:04 -- common/autotest_common.sh@960 -- # wait 95687 00:27:15.536 21:30:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:15.536 21:30:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:15.536 21:30:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:15.536 21:30:04 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:15.536 21:30:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:15.536 21:30:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.536 21:30:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.536 21:30:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.794 21:30:04 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:15.795 21:30:04 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:27:15.795 00:27:15.795 real 0m2.634s 00:27:15.795 user 0m2.725s 00:27:15.795 sys 0m0.658s 00:27:15.795 21:30:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:15.795 21:30:04 -- common/autotest_common.sh@10 -- # set +x 00:27:15.795 ************************************ 00:27:15.795 END TEST nvmf_fuzz 00:27:15.795 ************************************ 00:27:15.795 21:30:04 -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:15.795 21:30:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:15.795 21:30:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:15.795 21:30:04 -- common/autotest_common.sh@10 -- # set +x 00:27:15.795 ************************************ 00:27:15.795 START TEST nvmf_multiconnection 00:27:15.795 ************************************ 00:27:15.795 21:30:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:16.054 * Looking for test storage... 
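The nvmf_fuzz stage that finishes above reduces to a short RPC sequence: create a TCP transport, back one namespace with a malloc bdev, expose it through a single subsystem and listener, then point nvme_fuzz at the resulting trid. A minimal sketch of that sequence (an illustration, not the test script itself), assuming nvmf_tgt is already running and that rpc_cmd in the trace forwards to scripts/rpc.py on the default socket:

  # Target bring-up as driven by fabrics_fuzz.sh in the trace above
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create -b Malloc0 64 512      # 64 MB RAM-backed bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 30-second seeded fuzz run against that listener, flags copied from the trace:
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a

The second nvme_fuzz invocation in the trace repeats the run driven by the bundled example.json instead of a timed seeded run, before the subsystem is deleted and the target torn down as logged below.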
00:27:16.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:16.054 21:30:05 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:16.054 21:30:05 -- nvmf/common.sh@7 -- # uname -s 00:27:16.054 21:30:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.054 21:30:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.054 21:30:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.054 21:30:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.054 21:30:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.054 21:30:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.054 21:30:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.054 21:30:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.054 21:30:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.054 21:30:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.054 21:30:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:27:16.054 21:30:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:27:16.054 21:30:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.054 21:30:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.054 21:30:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:16.054 21:30:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.054 21:30:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:16.054 21:30:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.054 21:30:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.054 21:30:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.054 21:30:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.054 21:30:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.054 21:30:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.054 21:30:05 -- paths/export.sh@5 -- # export PATH 00:27:16.054 21:30:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.054 21:30:05 -- nvmf/common.sh@47 -- # : 0 00:27:16.054 21:30:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:16.054 21:30:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:16.054 21:30:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.054 21:30:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.054 21:30:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.054 21:30:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:16.054 21:30:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:16.054 21:30:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:16.054 21:30:05 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:16.055 21:30:05 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:16.055 21:30:05 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:27:16.055 21:30:05 -- target/multiconnection.sh@16 -- # nvmftestinit 00:27:16.055 21:30:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:16.055 21:30:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.055 21:30:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:16.055 21:30:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:16.055 21:30:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:16.055 21:30:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.055 21:30:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.055 21:30:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.055 21:30:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:16.055 21:30:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:16.055 21:30:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:16.055 21:30:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:16.055 21:30:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:16.055 21:30:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:16.055 21:30:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.055 21:30:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.055 21:30:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:16.055 21:30:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:16.055 21:30:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:16.055 21:30:05 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:16.055 21:30:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:16.055 21:30:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.055 21:30:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:16.055 21:30:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:16.055 21:30:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:16.055 21:30:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:16.055 21:30:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:16.055 21:30:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:16.055 Cannot find device "nvmf_tgt_br" 00:27:16.055 21:30:05 -- nvmf/common.sh@155 -- # true 00:27:16.055 21:30:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:16.055 Cannot find device "nvmf_tgt_br2" 00:27:16.055 21:30:05 -- nvmf/common.sh@156 -- # true 00:27:16.055 21:30:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:16.055 21:30:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:16.055 Cannot find device "nvmf_tgt_br" 00:27:16.055 21:30:05 -- nvmf/common.sh@158 -- # true 00:27:16.055 21:30:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:16.055 Cannot find device "nvmf_tgt_br2" 00:27:16.055 21:30:05 -- nvmf/common.sh@159 -- # true 00:27:16.055 21:30:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:16.055 21:30:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:16.314 21:30:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:16.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:16.314 21:30:05 -- nvmf/common.sh@162 -- # true 00:27:16.314 21:30:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:16.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:16.314 21:30:05 -- nvmf/common.sh@163 -- # true 00:27:16.314 21:30:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:16.314 21:30:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:16.314 21:30:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:16.314 21:30:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:16.314 21:30:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:16.314 21:30:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:16.314 21:30:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:16.314 21:30:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:16.314 21:30:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:16.314 21:30:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:16.314 21:30:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:16.314 21:30:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:16.314 21:30:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:16.314 21:30:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:16.314 21:30:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:27:16.314 21:30:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:16.314 21:30:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:16.314 21:30:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:16.314 21:30:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:16.314 21:30:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:16.314 21:30:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:16.314 21:30:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:16.314 21:30:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:16.314 21:30:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:16.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:27:16.314 00:27:16.314 --- 10.0.0.2 ping statistics --- 00:27:16.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.314 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:27:16.314 21:30:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:16.314 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:16.314 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:27:16.314 00:27:16.314 --- 10.0.0.3 ping statistics --- 00:27:16.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.314 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:27:16.314 21:30:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:16.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:27:16.314 00:27:16.314 --- 10.0.0.1 ping statistics --- 00:27:16.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.314 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:27:16.314 21:30:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.314 21:30:05 -- nvmf/common.sh@422 -- # return 0 00:27:16.314 21:30:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:16.314 21:30:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.314 21:30:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:16.314 21:30:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:16.314 21:30:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.314 21:30:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:16.314 21:30:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:16.314 21:30:05 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:27:16.314 21:30:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:16.314 21:30:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:16.314 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:27:16.314 21:30:05 -- nvmf/common.sh@470 -- # nvmfpid=95893 00:27:16.314 21:30:05 -- nvmf/common.sh@471 -- # waitforlisten 95893 00:27:16.314 21:30:05 -- common/autotest_common.sh@817 -- # '[' -z 95893 ']' 00:27:16.314 21:30:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.315 21:30:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:16.315 21:30:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:16.315 21:30:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.315 21:30:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:16.315 21:30:05 -- common/autotest_common.sh@10 -- # set +x 00:27:16.573 [2024-04-26 21:30:05.593921] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:16.573 [2024-04-26 21:30:05.594014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.573 [2024-04-26 21:30:05.735831] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.573 [2024-04-26 21:30:05.806195] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.573 [2024-04-26 21:30:05.806318] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.573 [2024-04-26 21:30:05.806360] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.573 [2024-04-26 21:30:05.806376] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.573 [2024-04-26 21:30:05.806410] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:16.573 [2024-04-26 21:30:05.806658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.573 [2024-04-26 21:30:05.806931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.573 [2024-04-26 21:30:05.806732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.573 [2024-04-26 21:30:05.806939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.506 21:30:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:17.506 21:30:06 -- common/autotest_common.sh@850 -- # return 0 00:27:17.506 21:30:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:17.506 21:30:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 21:30:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.506 21:30:06 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 [2024-04-26 21:30:06.545266] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.506 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.506 21:30:06 -- target/multiconnection.sh@21 -- # seq 1 11 00:27:17.506 21:30:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.506 21:30:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 Malloc1 00:27:17.506 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.506 21:30:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- 
common/autotest_common.sh@10 -- # set +x 00:27:17.506 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.506 21:30:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.506 21:30:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 [2024-04-26 21:30:06.623670] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.506 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.506 21:30:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.506 21:30:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 Malloc2 00:27:17.506 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.506 21:30:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.506 21:30:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.506 21:30:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.506 21:30:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.506 21:30:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 Malloc3 00:27:17.506 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.506 21:30:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.506 21:30:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.506 21:30:06 
-- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.506 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.506 21:30:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.506 21:30:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:17.506 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.506 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.766 Malloc4 00:27:17.766 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.766 21:30:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:27:17.766 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.766 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.766 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.766 21:30:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:17.766 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.766 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.766 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.766 21:30:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:27:17.766 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.766 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.766 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.766 21:30:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.766 21:30:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:17.766 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.766 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.766 Malloc5 00:27:17.766 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.766 21:30:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:27:17.766 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.766 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.766 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.766 21:30:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:17.766 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.766 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.766 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.766 21:30:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:27:17.766 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.766 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.766 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.766 21:30:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.766 21:30:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:27:17.766 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 
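The block of near-identical traces above, which continues below through Malloc11 and cnode11, is one loop in multiconnection.sh unrolled: for each of the NVMF_SUBSYS=11 subsystems it creates a 64 MB malloc bdev, a subsystem with the matching SPDK$i serial, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. Condensed into the same commands (again assuming rpc_cmd forwards to scripts/rpc.py), each iteration amounts to:

  NVMF_SUBSYS=11
  for i in $(seq 1 $NVMF_SUBSYS); do
      rpc.py bdev_malloc_create 64 512 -b "Malloc$i"                              # backing bdev
      rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # subsystem + serial
      rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"       # attach bdev as a namespace
      rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done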
00:27:17.766 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.766 Malloc6 00:27:17.766 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.766 21:30:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:27:17.766 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.766 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.766 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.766 21:30:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:27:17.766 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.766 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.766 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.767 21:30:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:27:17.767 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.767 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.767 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.767 21:30:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.767 21:30:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:27:17.767 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.767 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.767 Malloc7 00:27:17.767 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.767 21:30:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:17.767 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.767 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.767 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.767 21:30:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:17.767 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.767 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.767 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.767 21:30:06 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:27:17.767 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.767 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.767 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.767 21:30:06 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:17.767 21:30:06 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:17.767 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.767 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.767 Malloc8 00:27:17.767 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.767 21:30:06 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:17.767 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.767 21:30:06 -- common/autotest_common.sh@10 -- # set +x 00:27:17.767 21:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.767 21:30:06 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:17.767 21:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.767 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:17.767 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:17.767 21:30:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:27:17.767 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:17.767 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.037 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.037 21:30:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:18.037 21:30:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:18.037 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.037 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.037 Malloc9 00:27:18.038 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.038 21:30:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:18.038 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.038 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.038 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.038 21:30:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:18.038 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.038 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.038 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.038 21:30:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:27:18.038 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.038 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.038 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.038 21:30:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:18.038 21:30:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:18.038 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.038 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.038 Malloc10 00:27:18.038 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.038 21:30:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:18.038 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.038 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.038 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.038 21:30:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:18.038 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.038 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.038 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.038 21:30:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:27:18.038 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.038 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.038 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:27:18.038 21:30:07 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:18.038 21:30:07 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:18.038 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.038 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.038 Malloc11 00:27:18.038 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.038 21:30:07 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:18.038 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.038 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.038 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.038 21:30:07 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:18.038 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.038 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.038 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.038 21:30:07 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:27:18.038 21:30:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:18.038 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:27:18.038 21:30:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:18.038 21:30:07 -- target/multiconnection.sh@28 -- # seq 1 11 00:27:18.038 21:30:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:18.038 21:30:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:18.304 21:30:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:18.304 21:30:07 -- common/autotest_common.sh@1184 -- # local i=0 00:27:18.304 21:30:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:27:18.304 21:30:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:27:18.304 21:30:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:27:20.209 21:30:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:27:20.209 21:30:09 -- common/autotest_common.sh@1193 -- # grep -c SPDK1 00:27:20.209 21:30:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:27:20.209 21:30:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:27:20.209 21:30:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:27:20.209 21:30:09 -- common/autotest_common.sh@1194 -- # return 0 00:27:20.209 21:30:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:20.209 21:30:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:27:20.469 21:30:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:20.469 21:30:09 -- common/autotest_common.sh@1184 -- # local i=0 00:27:20.469 21:30:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:27:20.469 21:30:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:27:20.469 21:30:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:27:22.384 21:30:11 -- 
common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:27:22.384 21:30:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:27:22.384 21:30:11 -- common/autotest_common.sh@1193 -- # grep -c SPDK2 00:27:22.384 21:30:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:27:22.384 21:30:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:27:22.384 21:30:11 -- common/autotest_common.sh@1194 -- # return 0 00:27:22.384 21:30:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:22.384 21:30:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:22.646 21:30:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:22.646 21:30:11 -- common/autotest_common.sh@1184 -- # local i=0 00:27:22.646 21:30:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:27:22.646 21:30:11 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:27:22.646 21:30:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:27:24.568 21:30:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:27:24.568 21:30:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:27:24.568 21:30:13 -- common/autotest_common.sh@1193 -- # grep -c SPDK3 00:27:24.568 21:30:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:27:24.568 21:30:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:27:24.568 21:30:13 -- common/autotest_common.sh@1194 -- # return 0 00:27:24.568 21:30:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.568 21:30:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:27:24.826 21:30:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:24.826 21:30:13 -- common/autotest_common.sh@1184 -- # local i=0 00:27:24.826 21:30:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:27:24.826 21:30:13 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:27:24.826 21:30:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:27:26.732 21:30:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:27:26.732 21:30:15 -- common/autotest_common.sh@1193 -- # grep -c SPDK4 00:27:26.732 21:30:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:27:26.732 21:30:15 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:27:26.732 21:30:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:27:26.732 21:30:15 -- common/autotest_common.sh@1194 -- # return 0 00:27:26.732 21:30:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:26.732 21:30:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:27:26.991 21:30:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:26.991 21:30:16 -- common/autotest_common.sh@1184 -- # local i=0 00:27:26.991 21:30:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:27:26.991 21:30:16 -- 
common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:27:26.991 21:30:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:27:28.986 21:30:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:27:28.986 21:30:18 -- common/autotest_common.sh@1193 -- # grep -c SPDK5 00:27:28.986 21:30:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:27:28.986 21:30:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:27:28.986 21:30:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:27:28.986 21:30:18 -- common/autotest_common.sh@1194 -- # return 0 00:27:28.986 21:30:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:28.986 21:30:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:29.244 21:30:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:29.244 21:30:18 -- common/autotest_common.sh@1184 -- # local i=0 00:27:29.244 21:30:18 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:27:29.244 21:30:18 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:27:29.244 21:30:18 -- common/autotest_common.sh@1191 -- # sleep 2 00:27:31.144 21:30:20 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:27:31.144 21:30:20 -- common/autotest_common.sh@1193 -- # grep -c SPDK6 00:27:31.144 21:30:20 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:27:31.144 21:30:20 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:27:31.144 21:30:20 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:27:31.144 21:30:20 -- common/autotest_common.sh@1194 -- # return 0 00:27:31.144 21:30:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:31.144 21:30:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:31.403 21:30:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:31.403 21:30:20 -- common/autotest_common.sh@1184 -- # local i=0 00:27:31.403 21:30:20 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:27:31.403 21:30:20 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:27:31.403 21:30:20 -- common/autotest_common.sh@1191 -- # sleep 2 00:27:33.302 21:30:22 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:27:33.561 21:30:22 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:27:33.561 21:30:22 -- common/autotest_common.sh@1193 -- # grep -c SPDK7 00:27:33.561 21:30:22 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:27:33.561 21:30:22 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:27:33.561 21:30:22 -- common/autotest_common.sh@1194 -- # return 0 00:27:33.561 21:30:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:33.561 21:30:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:33.561 21:30:22 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:33.561 21:30:22 -- common/autotest_common.sh@1184 -- # local i=0 
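The alternating connect/sleep/lsblk entries above, continuing below for cnode8 through cnode11, are the host side of the test: each subsystem is attached over TCP with nvme-cli, and the script polls lsblk until a block device carrying the expected SPDK$i serial appears. A simplified sketch of that pattern (the real waitforserial helper also caps the number of retries), assuming nvme-cli is installed and the nvme-tcp module is loaded as in the earlier modprobe:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca
  HOSTID=684e36b6-186e-42df-9976-6b13930a8eca
  for i in $(seq 1 11); do
      nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
          -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
      # Wait for the namespace to show up with the serial assigned at subsystem creation.
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
          sleep 2
      done
  done

With all eleven namespaces visible, the fio-wrapper call further below drives a 10-second 256 KiB read job at queue depth 64 against each attached /dev/nvmeXn1 device, producing the per-job statistics that follow.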
00:27:33.561 21:30:22 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:27:33.561 21:30:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:27:33.561 21:30:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:27:36.089 21:30:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:27:36.089 21:30:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:27:36.089 21:30:24 -- common/autotest_common.sh@1193 -- # grep -c SPDK8 00:27:36.089 21:30:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:27:36.089 21:30:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:27:36.089 21:30:24 -- common/autotest_common.sh@1194 -- # return 0 00:27:36.089 21:30:24 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.089 21:30:24 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:36.089 21:30:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:36.089 21:30:24 -- common/autotest_common.sh@1184 -- # local i=0 00:27:36.089 21:30:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:27:36.089 21:30:24 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:27:36.090 21:30:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:27:38.010 21:30:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:27:38.010 21:30:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:27:38.010 21:30:26 -- common/autotest_common.sh@1193 -- # grep -c SPDK9 00:27:38.010 21:30:26 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:27:38.010 21:30:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:27:38.011 21:30:26 -- common/autotest_common.sh@1194 -- # return 0 00:27:38.011 21:30:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.011 21:30:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:38.011 21:30:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:38.011 21:30:27 -- common/autotest_common.sh@1184 -- # local i=0 00:27:38.011 21:30:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:27:38.011 21:30:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:27:38.011 21:30:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:27:39.910 21:30:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:27:39.910 21:30:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:27:39.910 21:30:29 -- common/autotest_common.sh@1193 -- # grep -c SPDK10 00:27:39.910 21:30:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:27:39.910 21:30:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:27:39.910 21:30:29 -- common/autotest_common.sh@1194 -- # return 0 00:27:39.910 21:30:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.910 21:30:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:40.167 21:30:29 
-- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:40.167 21:30:29 -- common/autotest_common.sh@1184 -- # local i=0 00:27:40.167 21:30:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:27:40.167 21:30:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:27:40.167 21:30:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:27:42.085 21:30:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:27:42.085 21:30:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:27:42.085 21:30:31 -- common/autotest_common.sh@1193 -- # grep -c SPDK11 00:27:42.085 21:30:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:27:42.085 21:30:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:27:42.085 21:30:31 -- common/autotest_common.sh@1194 -- # return 0 00:27:42.085 21:30:31 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:42.346 [global] 00:27:42.346 thread=1 00:27:42.346 invalidate=1 00:27:42.346 rw=read 00:27:42.346 time_based=1 00:27:42.346 runtime=10 00:27:42.346 ioengine=libaio 00:27:42.346 direct=1 00:27:42.346 bs=262144 00:27:42.346 iodepth=64 00:27:42.346 norandommap=1 00:27:42.346 numjobs=1 00:27:42.346 00:27:42.346 [job0] 00:27:42.346 filename=/dev/nvme0n1 00:27:42.346 [job1] 00:27:42.346 filename=/dev/nvme10n1 00:27:42.346 [job2] 00:27:42.346 filename=/dev/nvme1n1 00:27:42.346 [job3] 00:27:42.346 filename=/dev/nvme2n1 00:27:42.346 [job4] 00:27:42.346 filename=/dev/nvme3n1 00:27:42.346 [job5] 00:27:42.346 filename=/dev/nvme4n1 00:27:42.346 [job6] 00:27:42.346 filename=/dev/nvme5n1 00:27:42.346 [job7] 00:27:42.346 filename=/dev/nvme6n1 00:27:42.346 [job8] 00:27:42.346 filename=/dev/nvme7n1 00:27:42.346 [job9] 00:27:42.346 filename=/dev/nvme8n1 00:27:42.346 [job10] 00:27:42.346 filename=/dev/nvme9n1 00:27:42.605 Could not set queue depth (nvme0n1) 00:27:42.605 Could not set queue depth (nvme10n1) 00:27:42.605 Could not set queue depth (nvme1n1) 00:27:42.605 Could not set queue depth (nvme2n1) 00:27:42.605 Could not set queue depth (nvme3n1) 00:27:42.605 Could not set queue depth (nvme4n1) 00:27:42.605 Could not set queue depth (nvme5n1) 00:27:42.605 Could not set queue depth (nvme6n1) 00:27:42.605 Could not set queue depth (nvme7n1) 00:27:42.605 Could not set queue depth (nvme8n1) 00:27:42.605 Could not set queue depth (nvme9n1) 00:27:42.605 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:42.605 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:42.605 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:42.605 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:42.605 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:42.605 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:42.605 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:42.605 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:42.605 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:27:42.605 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:42.605 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:42.605 fio-3.35 00:27:42.605 Starting 11 threads 00:27:54.812 00:27:54.812 job0: (groupid=0, jobs=1): err= 0: pid=96372: Fri Apr 26 21:30:42 2024 00:27:54.812 read: IOPS=524, BW=131MiB/s (137MB/s)(1324MiB/10096msec) 00:27:54.812 slat (usec): min=15, max=82995, avg=1858.98, stdev=6881.74 00:27:54.812 clat (msec): min=23, max=206, avg=119.97, stdev=25.91 00:27:54.812 lat (msec): min=23, max=237, avg=121.83, stdev=26.97 00:27:54.812 clat percentiles (msec): 00:27:54.812 | 1.00th=[ 55], 5.00th=[ 82], 10.00th=[ 89], 20.00th=[ 96], 00:27:54.812 | 30.00th=[ 105], 40.00th=[ 114], 50.00th=[ 121], 60.00th=[ 128], 00:27:54.812 | 70.00th=[ 136], 80.00th=[ 146], 90.00th=[ 153], 95.00th=[ 159], 00:27:54.812 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 201], 00:27:54.812 | 99.99th=[ 207] 00:27:54.812 bw ( KiB/s): min=103424, max=173732, per=6.95%, avg=133880.85, stdev=23419.96, samples=20 00:27:54.812 iops : min= 404, max= 678, avg=522.90, stdev=91.42, samples=20 00:27:54.812 lat (msec) : 50=0.96%, 100=23.91%, 250=75.13% 00:27:54.812 cpu : usr=0.26%, sys=2.52%, ctx=1221, majf=0, minf=4097 00:27:54.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.812 issued rwts: total=5295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.812 job1: (groupid=0, jobs=1): err= 0: pid=96373: Fri Apr 26 21:30:42 2024 00:27:54.812 read: IOPS=929, BW=232MiB/s (244MB/s)(2342MiB/10078msec) 00:27:54.812 slat (usec): min=15, max=64883, avg=988.62, stdev=3911.76 00:27:54.812 clat (usec): min=870, max=218813, avg=67756.68, stdev=34983.58 00:27:54.812 lat (usec): min=928, max=229484, avg=68745.30, stdev=35574.75 00:27:54.812 clat percentiles (msec): 00:27:54.812 | 1.00th=[ 16], 5.00th=[ 25], 10.00th=[ 29], 20.00th=[ 36], 00:27:54.812 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 68], 00:27:54.812 | 70.00th=[ 73], 80.00th=[ 88], 90.00th=[ 122], 95.00th=[ 146], 00:27:54.812 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 186], 99.95th=[ 199], 00:27:54.812 | 99.99th=[ 220] 00:27:54.812 bw ( KiB/s): min=107008, max=526336, per=12.36%, avg=238155.95, stdev=115279.40, samples=20 00:27:54.812 iops : min= 418, max= 2056, avg=930.25, stdev=450.30, samples=20 00:27:54.812 lat (usec) : 1000=0.01% 00:27:54.812 lat (msec) : 2=0.01%, 4=0.41%, 10=0.29%, 20=1.96%, 50=26.83% 00:27:54.812 lat (msec) : 100=54.80%, 250=15.69% 00:27:54.812 cpu : usr=0.43%, sys=4.56%, ctx=2291, majf=0, minf=4097 00:27:54.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:27:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.812 issued rwts: total=9368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.812 job2: (groupid=0, jobs=1): err= 0: pid=96374: Fri Apr 26 21:30:42 2024 00:27:54.812 read: IOPS=524, BW=131MiB/s (138MB/s)(1323MiB/10089msec) 00:27:54.812 slat (usec): min=15, max=93293, avg=1786.44, 
stdev=7238.86 00:27:54.812 clat (msec): min=28, max=247, avg=119.97, stdev=28.48 00:27:54.812 lat (msec): min=28, max=262, avg=121.76, stdev=29.70 00:27:54.812 clat percentiles (msec): 00:27:54.812 | 1.00th=[ 50], 5.00th=[ 68], 10.00th=[ 85], 20.00th=[ 95], 00:27:54.812 | 30.00th=[ 106], 40.00th=[ 115], 50.00th=[ 123], 60.00th=[ 131], 00:27:54.812 | 70.00th=[ 138], 80.00th=[ 146], 90.00th=[ 155], 95.00th=[ 161], 00:27:54.812 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 215], 99.95th=[ 232], 00:27:54.812 | 99.99th=[ 249] 00:27:54.812 bw ( KiB/s): min=96063, max=190464, per=6.94%, avg=133812.50, stdev=26558.10, samples=20 00:27:54.812 iops : min= 375, max= 744, avg=522.60, stdev=103.74, samples=20 00:27:54.812 lat (msec) : 50=1.36%, 100=23.65%, 250=74.99% 00:27:54.812 cpu : usr=0.19%, sys=2.60%, ctx=1262, majf=0, minf=4097 00:27:54.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:54.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.812 issued rwts: total=5293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.812 job3: (groupid=0, jobs=1): err= 0: pid=96375: Fri Apr 26 21:30:42 2024 00:27:54.812 read: IOPS=1132, BW=283MiB/s (297MB/s)(2856MiB/10089msec) 00:27:54.813 slat (usec): min=15, max=80751, avg=850.18, stdev=3540.01 00:27:54.813 clat (msec): min=5, max=188, avg=55.60, stdev=25.77 00:27:54.813 lat (msec): min=5, max=226, avg=56.45, stdev=26.24 00:27:54.813 clat percentiles (msec): 00:27:54.813 | 1.00th=[ 18], 5.00th=[ 25], 10.00th=[ 29], 20.00th=[ 34], 00:27:54.813 | 30.00th=[ 40], 40.00th=[ 46], 50.00th=[ 55], 60.00th=[ 59], 00:27:54.813 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 85], 95.00th=[ 105], 00:27:54.813 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 182], 99.95th=[ 182], 00:27:54.813 | 99.99th=[ 188] 00:27:54.813 bw ( KiB/s): min=115530, max=524800, per=15.08%, avg=290644.80, stdev=95648.31, samples=20 00:27:54.813 iops : min= 451, max= 2050, avg=1135.25, stdev=373.60, samples=20 00:27:54.813 lat (msec) : 10=0.44%, 20=1.48%, 50=41.98%, 100=50.36%, 250=5.74% 00:27:54.813 cpu : usr=0.41%, sys=5.15%, ctx=2005, majf=0, minf=4097 00:27:54.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:27:54.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.813 issued rwts: total=11422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.813 job4: (groupid=0, jobs=1): err= 0: pid=96376: Fri Apr 26 21:30:42 2024 00:27:54.813 read: IOPS=545, BW=136MiB/s (143MB/s)(1373MiB/10072msec) 00:27:54.813 slat (usec): min=17, max=114758, avg=1786.86, stdev=6519.09 00:27:54.813 clat (msec): min=22, max=246, avg=115.39, stdev=25.25 00:27:54.813 lat (msec): min=23, max=246, avg=117.18, stdev=26.22 00:27:54.813 clat percentiles (msec): 00:27:54.813 | 1.00th=[ 54], 5.00th=[ 64], 10.00th=[ 75], 20.00th=[ 101], 00:27:54.813 | 30.00th=[ 110], 40.00th=[ 115], 50.00th=[ 120], 60.00th=[ 124], 00:27:54.813 | 70.00th=[ 127], 80.00th=[ 132], 90.00th=[ 142], 95.00th=[ 150], 00:27:54.813 | 99.00th=[ 180], 99.50th=[ 197], 99.90th=[ 209], 99.95th=[ 224], 00:27:54.813 | 99.99th=[ 247] 00:27:54.813 bw ( KiB/s): min=106794, max=215552, per=7.20%, avg=138802.00, stdev=24869.86, samples=20 00:27:54.813 
iops : min= 417, max= 842, avg=542.00, stdev=97.20, samples=20 00:27:54.813 lat (msec) : 50=0.62%, 100=19.36%, 250=80.02% 00:27:54.813 cpu : usr=0.19%, sys=2.95%, ctx=1119, majf=0, minf=4097 00:27:54.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:54.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.813 issued rwts: total=5491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.813 job5: (groupid=0, jobs=1): err= 0: pid=96377: Fri Apr 26 21:30:42 2024 00:27:54.813 read: IOPS=506, BW=127MiB/s (133MB/s)(1280MiB/10096msec) 00:27:54.813 slat (usec): min=16, max=84253, avg=1950.37, stdev=6617.56 00:27:54.813 clat (msec): min=22, max=204, avg=124.11, stdev=25.43 00:27:54.813 lat (msec): min=22, max=232, avg=126.06, stdev=26.39 00:27:54.813 clat percentiles (msec): 00:27:54.813 | 1.00th=[ 77], 5.00th=[ 89], 10.00th=[ 93], 20.00th=[ 101], 00:27:54.813 | 30.00th=[ 109], 40.00th=[ 117], 50.00th=[ 124], 60.00th=[ 130], 00:27:54.813 | 70.00th=[ 138], 80.00th=[ 148], 90.00th=[ 159], 95.00th=[ 165], 00:27:54.813 | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 205], 99.95th=[ 205], 00:27:54.813 | 99.99th=[ 205] 00:27:54.813 bw ( KiB/s): min=92160, max=171688, per=6.71%, avg=129240.25, stdev=22043.23, samples=20 00:27:54.813 iops : min= 360, max= 670, avg=504.70, stdev=85.98, samples=20 00:27:54.813 lat (msec) : 50=0.59%, 100=19.19%, 250=80.23% 00:27:54.813 cpu : usr=0.27%, sys=2.53%, ctx=1140, majf=0, minf=4097 00:27:54.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:54.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.813 issued rwts: total=5118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.813 job6: (groupid=0, jobs=1): err= 0: pid=96378: Fri Apr 26 21:30:42 2024 00:27:54.813 read: IOPS=601, BW=150MiB/s (158MB/s)(1517MiB/10093msec) 00:27:54.813 slat (usec): min=13, max=107899, avg=1572.85, stdev=6444.95 00:27:54.813 clat (msec): min=8, max=236, avg=104.72, stdev=40.89 00:27:54.813 lat (msec): min=8, max=259, avg=106.30, stdev=41.90 00:27:54.813 clat percentiles (msec): 00:27:54.813 | 1.00th=[ 17], 5.00th=[ 39], 10.00th=[ 51], 20.00th=[ 63], 00:27:54.813 | 30.00th=[ 73], 40.00th=[ 96], 50.00th=[ 116], 60.00th=[ 124], 00:27:54.813 | 70.00th=[ 132], 80.00th=[ 142], 90.00th=[ 155], 95.00th=[ 165], 00:27:54.813 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 209], 99.95th=[ 213], 00:27:54.813 | 99.99th=[ 236] 00:27:54.813 bw ( KiB/s): min=93696, max=264192, per=7.97%, avg=153627.55, stdev=50986.15, samples=20 00:27:54.813 iops : min= 366, max= 1032, avg=600.05, stdev=199.14, samples=20 00:27:54.813 lat (msec) : 10=0.16%, 20=1.55%, 50=8.06%, 100=31.59%, 250=58.64% 00:27:54.813 cpu : usr=0.15%, sys=2.99%, ctx=1510, majf=0, minf=4097 00:27:54.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:54.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.813 issued rwts: total=6068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.813 job7: (groupid=0, jobs=1): err= 0: 
pid=96379: Fri Apr 26 21:30:42 2024 00:27:54.813 read: IOPS=544, BW=136MiB/s (143MB/s)(1372MiB/10075msec) 00:27:54.813 slat (usec): min=14, max=90235, avg=1760.73, stdev=6149.37 00:27:54.813 clat (msec): min=19, max=213, avg=115.55, stdev=29.00 00:27:54.813 lat (msec): min=19, max=213, avg=117.31, stdev=29.95 00:27:54.813 clat percentiles (msec): 00:27:54.813 | 1.00th=[ 38], 5.00th=[ 55], 10.00th=[ 64], 20.00th=[ 101], 00:27:54.813 | 30.00th=[ 113], 40.00th=[ 118], 50.00th=[ 124], 60.00th=[ 127], 00:27:54.813 | 70.00th=[ 131], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 153], 00:27:54.813 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 190], 99.95th=[ 209], 00:27:54.813 | 99.99th=[ 213] 00:27:54.813 bw ( KiB/s): min=101280, max=260087, per=7.20%, avg=138760.25, stdev=33942.49, samples=20 00:27:54.813 iops : min= 395, max= 1015, avg=541.90, stdev=132.45, samples=20 00:27:54.813 lat (msec) : 20=0.04%, 50=3.32%, 100=16.57%, 250=80.08% 00:27:54.813 cpu : usr=0.19%, sys=2.84%, ctx=1365, majf=0, minf=4097 00:27:54.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:54.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.813 issued rwts: total=5486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.813 job8: (groupid=0, jobs=1): err= 0: pid=96380: Fri Apr 26 21:30:42 2024 00:27:54.813 read: IOPS=600, BW=150MiB/s (157MB/s)(1512MiB/10066msec) 00:27:54.813 slat (usec): min=15, max=107474, avg=1607.81, stdev=6364.35 00:27:54.813 clat (msec): min=2, max=253, avg=104.80, stdev=40.23 00:27:54.813 lat (msec): min=2, max=259, avg=106.40, stdev=41.22 00:27:54.813 clat percentiles (msec): 00:27:54.813 | 1.00th=[ 18], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 66], 00:27:54.813 | 30.00th=[ 102], 40.00th=[ 113], 50.00th=[ 120], 60.00th=[ 125], 00:27:54.813 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 142], 95.00th=[ 150], 00:27:54.813 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 186], 99.95th=[ 192], 00:27:54.813 | 99.99th=[ 253] 00:27:54.813 bw ( KiB/s): min=99129, max=374272, per=7.95%, avg=153126.00, stdev=72941.82, samples=20 00:27:54.813 iops : min= 387, max= 1462, avg=598.10, stdev=284.93, samples=20 00:27:54.813 lat (msec) : 4=0.07%, 10=0.43%, 20=1.57%, 50=16.37%, 100=10.92% 00:27:54.813 lat (msec) : 250=70.63%, 500=0.02% 00:27:54.813 cpu : usr=0.24%, sys=2.82%, ctx=1454, majf=0, minf=4097 00:27:54.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:54.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.813 issued rwts: total=6046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.813 job9: (groupid=0, jobs=1): err= 0: pid=96381: Fri Apr 26 21:30:42 2024 00:27:54.813 read: IOPS=649, BW=162MiB/s (170MB/s)(1637MiB/10084msec) 00:27:54.813 slat (usec): min=16, max=111965, avg=1491.13, stdev=6023.84 00:27:54.813 clat (msec): min=17, max=247, avg=96.93, stdev=45.46 00:27:54.813 lat (msec): min=18, max=263, avg=98.42, stdev=46.41 00:27:54.813 clat percentiles (msec): 00:27:54.813 | 1.00th=[ 23], 5.00th=[ 25], 10.00th=[ 29], 20.00th=[ 35], 00:27:54.813 | 30.00th=[ 67], 40.00th=[ 97], 50.00th=[ 115], 60.00th=[ 124], 00:27:54.813 | 70.00th=[ 131], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 153], 
00:27:54.813 | 99.00th=[ 171], 99.50th=[ 178], 99.90th=[ 199], 99.95th=[ 249], 00:27:54.813 | 99.99th=[ 249] 00:27:54.813 bw ( KiB/s): min=95232, max=572295, per=8.61%, avg=165910.85, stdev=111764.54, samples=20 00:27:54.813 iops : min= 372, max= 2235, avg=647.95, stdev=436.51, samples=20 00:27:54.813 lat (msec) : 20=0.64%, 50=24.82%, 100=15.32%, 250=59.22% 00:27:54.813 cpu : usr=0.31%, sys=3.48%, ctx=1794, majf=0, minf=4097 00:27:54.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:27:54.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.813 issued rwts: total=6548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.813 job10: (groupid=0, jobs=1): err= 0: pid=96382: Fri Apr 26 21:30:42 2024 00:27:54.813 read: IOPS=976, BW=244MiB/s (256MB/s)(2466MiB/10097msec) 00:27:54.813 slat (usec): min=15, max=60845, avg=948.79, stdev=3822.24 00:27:54.813 clat (msec): min=11, max=196, avg=64.44, stdev=43.80 00:27:54.813 lat (msec): min=11, max=223, avg=65.38, stdev=44.52 00:27:54.813 clat percentiles (msec): 00:27:54.813 | 1.00th=[ 18], 5.00th=[ 23], 10.00th=[ 26], 20.00th=[ 30], 00:27:54.813 | 30.00th=[ 34], 40.00th=[ 38], 50.00th=[ 47], 60.00th=[ 62], 00:27:54.813 | 70.00th=[ 73], 80.00th=[ 95], 90.00th=[ 148], 95.00th=[ 157], 00:27:54.813 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 190], 99.95th=[ 192], 00:27:54.813 | 99.99th=[ 197] 00:27:54.813 bw ( KiB/s): min=99840, max=541696, per=13.02%, avg=250793.45, stdev=155633.84, samples=20 00:27:54.813 iops : min= 390, max= 2116, avg=979.60, stdev=607.99, samples=20 00:27:54.814 lat (msec) : 20=3.06%, 50=48.20%, 100=29.79%, 250=18.96% 00:27:54.814 cpu : usr=0.24%, sys=4.98%, ctx=2570, majf=0, minf=4097 00:27:54.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:54.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:54.814 issued rwts: total=9864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.814 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:54.814 00:27:54.814 Run status group 0 (all jobs): 00:27:54.814 READ: bw=1882MiB/s (1973MB/s), 127MiB/s-283MiB/s (133MB/s-297MB/s), io=18.6GiB (19.9GB), run=10066-10097msec 00:27:54.814 00:27:54.814 Disk stats (read/write): 00:27:54.814 nvme0n1: ios=10550/0, merge=0/0, ticks=1245536/0, in_queue=1245536, util=97.67% 00:27:54.814 nvme10n1: ios=18650/0, merge=0/0, ticks=1229728/0, in_queue=1229728, util=97.22% 00:27:54.814 nvme1n1: ios=10534/0, merge=0/0, ticks=1242704/0, in_queue=1242704, util=97.50% 00:27:54.814 nvme2n1: ios=22811/0, merge=0/0, ticks=1237001/0, in_queue=1237001, util=98.13% 00:27:54.814 nvme3n1: ios=10895/0, merge=0/0, ticks=1242503/0, in_queue=1242503, util=98.27% 00:27:54.814 nvme4n1: ios=10170/0, merge=0/0, ticks=1245197/0, in_queue=1245197, util=98.28% 00:27:54.814 nvme5n1: ios=12058/0, merge=0/0, ticks=1236308/0, in_queue=1236308, util=97.80% 00:27:54.814 nvme6n1: ios=10946/0, merge=0/0, ticks=1244647/0, in_queue=1244647, util=97.91% 00:27:54.814 nvme7n1: ios=12043/0, merge=0/0, ticks=1243390/0, in_queue=1243390, util=98.50% 00:27:54.814 nvme8n1: ios=12989/0, merge=0/0, ticks=1235272/0, in_queue=1235272, util=98.49% 00:27:54.814 nvme9n1: ios=19698/0, merge=0/0, ticks=1230948/0, in_queue=1230948, util=98.14% 00:27:54.814 
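The sequential-read pass ends with the group summary and per-device disk stats above; the wrapper is then re-run for the random-write pass traced below (fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10). As a rough sketch, the job file that pass dumps into the log could be rebuilt by hand as follows; this is assembled from the [global]/[jobN] lines printed below, not from the fio-wrapper source, and the multiconn.fio name plus the /dev/nvme[0-9]*n1 glob are illustrative assumptions.

    # Sketch only: rebuild by hand the fio job file shown in the trace below.
    cat > multiconn.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=262144
    iodepth=64
    norandommap=1
    numjobs=1
    EOF
    n=0
    for dev in /dev/nvme[0-9]*n1; do            # assumed glob over the 11 namespaces connected earlier in the test
        printf '[job%d]\nfilename=%s\n' "$n" "$dev" >> multiconn.fio
        n=$((n + 1))
    done
    fio multiconn.fio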
21:30:42 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:54.814 [global] 00:27:54.814 thread=1 00:27:54.814 invalidate=1 00:27:54.814 rw=randwrite 00:27:54.814 time_based=1 00:27:54.814 runtime=10 00:27:54.814 ioengine=libaio 00:27:54.814 direct=1 00:27:54.814 bs=262144 00:27:54.814 iodepth=64 00:27:54.814 norandommap=1 00:27:54.814 numjobs=1 00:27:54.814 00:27:54.814 [job0] 00:27:54.814 filename=/dev/nvme0n1 00:27:54.814 [job1] 00:27:54.814 filename=/dev/nvme10n1 00:27:54.814 [job2] 00:27:54.814 filename=/dev/nvme1n1 00:27:54.814 [job3] 00:27:54.814 filename=/dev/nvme2n1 00:27:54.814 [job4] 00:27:54.814 filename=/dev/nvme3n1 00:27:54.814 [job5] 00:27:54.814 filename=/dev/nvme4n1 00:27:54.814 [job6] 00:27:54.814 filename=/dev/nvme5n1 00:27:54.814 [job7] 00:27:54.814 filename=/dev/nvme6n1 00:27:54.814 [job8] 00:27:54.814 filename=/dev/nvme7n1 00:27:54.814 [job9] 00:27:54.814 filename=/dev/nvme8n1 00:27:54.814 [job10] 00:27:54.814 filename=/dev/nvme9n1 00:27:54.814 Could not set queue depth (nvme0n1) 00:27:54.814 Could not set queue depth (nvme10n1) 00:27:54.814 Could not set queue depth (nvme1n1) 00:27:54.814 Could not set queue depth (nvme2n1) 00:27:54.814 Could not set queue depth (nvme3n1) 00:27:54.814 Could not set queue depth (nvme4n1) 00:27:54.814 Could not set queue depth (nvme5n1) 00:27:54.814 Could not set queue depth (nvme6n1) 00:27:54.814 Could not set queue depth (nvme7n1) 00:27:54.814 Could not set queue depth (nvme8n1) 00:27:54.814 Could not set queue depth (nvme9n1) 00:27:54.814 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:54.814 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:54.814 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:54.814 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:54.814 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:54.814 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:54.814 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:54.814 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:54.814 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:54.814 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:54.814 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:54.814 fio-3.35 00:27:54.814 Starting 11 threads 00:28:04.835 00:28:04.835 job0: (groupid=0, jobs=1): err= 0: pid=96577: Fri Apr 26 21:30:52 2024 00:28:04.835 write: IOPS=456, BW=114MiB/s (120MB/s)(1154MiB/10116msec); 0 zone resets 00:28:04.835 slat (usec): min=17, max=50491, avg=2144.60, stdev=3948.99 00:28:04.835 clat (msec): min=4, max=247, avg=138.12, stdev=40.73 00:28:04.835 lat (msec): min=6, max=247, avg=140.27, stdev=41.24 00:28:04.835 clat percentiles (msec): 00:28:04.835 | 1.00th=[ 27], 5.00th=[ 69], 10.00th=[ 72], 20.00th=[ 105], 
00:28:04.835 | 30.00th=[ 125], 40.00th=[ 144], 50.00th=[ 150], 60.00th=[ 153], 00:28:04.835 | 70.00th=[ 157], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 188], 00:28:04.835 | 99.00th=[ 190], 99.50th=[ 197], 99.90th=[ 239], 99.95th=[ 239], 00:28:04.835 | 99.99th=[ 249] 00:28:04.835 bw ( KiB/s): min=86016, max=226304, per=7.22%, avg=116502.25, stdev=36060.38, samples=20 00:28:04.835 iops : min= 336, max= 884, avg=455.05, stdev=140.86, samples=20 00:28:04.835 lat (msec) : 10=0.07%, 20=0.24%, 50=1.47%, 100=17.62%, 250=80.60% 00:28:04.835 cpu : usr=1.10%, sys=1.93%, ctx=5721, majf=0, minf=1 00:28:04.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:28:04.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:04.835 issued rwts: total=0,4614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.835 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:04.835 job1: (groupid=0, jobs=1): err= 0: pid=96578: Fri Apr 26 21:30:52 2024 00:28:04.835 write: IOPS=429, BW=107MiB/s (113MB/s)(1093MiB/10189msec); 0 zone resets 00:28:04.835 slat (usec): min=23, max=45898, avg=2285.19, stdev=4248.56 00:28:04.835 clat (msec): min=4, max=364, avg=146.69, stdev=42.48 00:28:04.835 lat (msec): min=5, max=364, avg=148.98, stdev=42.88 00:28:04.835 clat percentiles (msec): 00:28:04.835 | 1.00th=[ 82], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 114], 00:28:04.835 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 142], 60.00th=[ 150], 00:28:04.835 | 70.00th=[ 153], 80.00th=[ 178], 90.00th=[ 220], 95.00th=[ 228], 00:28:04.835 | 99.00th=[ 241], 99.50th=[ 305], 99.90th=[ 355], 99.95th=[ 355], 00:28:04.835 | 99.99th=[ 363] 00:28:04.835 bw ( KiB/s): min=73580, max=145408, per=6.84%, avg=110284.85, stdev=26996.32, samples=20 00:28:04.835 iops : min= 287, max= 568, avg=430.70, stdev=105.46, samples=20 00:28:04.835 lat (msec) : 10=0.09%, 20=0.09%, 50=0.37%, 100=0.75%, 250=97.83% 00:28:04.835 lat (msec) : 500=0.87% 00:28:04.835 cpu : usr=0.96%, sys=1.36%, ctx=5096, majf=0, minf=1 00:28:04.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:04.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:04.835 issued rwts: total=0,4373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.835 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:04.835 job2: (groupid=0, jobs=1): err= 0: pid=96586: Fri Apr 26 21:30:52 2024 00:28:04.835 write: IOPS=882, BW=221MiB/s (231MB/s)(2244MiB/10169msec); 0 zone resets 00:28:04.835 slat (usec): min=19, max=49654, avg=1071.79, stdev=2480.78 00:28:04.835 clat (msec): min=7, max=373, avg=71.41, stdev=50.25 00:28:04.835 lat (msec): min=7, max=373, avg=72.48, stdev=50.91 00:28:04.835 clat percentiles (msec): 00:28:04.835 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:28:04.835 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 45], 60.00th=[ 74], 00:28:04.835 | 70.00th=[ 78], 80.00th=[ 80], 90.00th=[ 140], 95.00th=[ 211], 00:28:04.835 | 99.00th=[ 228], 99.50th=[ 234], 99.90th=[ 351], 99.95th=[ 363], 00:28:04.835 | 99.99th=[ 372] 00:28:04.835 bw ( KiB/s): min=73875, max=404992, per=14.14%, avg=228048.20, stdev=120603.00, samples=20 00:28:04.835 iops : min= 288, max= 1582, avg=890.65, stdev=471.07, samples=20 00:28:04.835 lat (msec) : 10=0.04%, 20=0.26%, 50=52.16%, 100=37.28%, 250=9.88% 00:28:04.835 lat (msec) : 500=0.38% 
00:28:04.835 cpu : usr=2.16%, sys=3.13%, ctx=11993, majf=0, minf=1 00:28:04.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:28:04.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:04.835 issued rwts: total=0,8975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.835 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:04.835 job3: (groupid=0, jobs=1): err= 0: pid=96591: Fri Apr 26 21:30:52 2024 00:28:04.835 write: IOPS=430, BW=108MiB/s (113MB/s)(1095MiB/10170msec); 0 zone resets 00:28:04.835 slat (usec): min=21, max=52767, avg=2280.30, stdev=4244.44 00:28:04.835 clat (msec): min=12, max=375, avg=146.25, stdev=42.62 00:28:04.835 lat (msec): min=12, max=375, avg=148.53, stdev=43.04 00:28:04.835 clat percentiles (msec): 00:28:04.835 | 1.00th=[ 83], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 114], 00:28:04.835 | 30.00th=[ 115], 40.00th=[ 117], 50.00th=[ 142], 60.00th=[ 150], 00:28:04.835 | 70.00th=[ 153], 80.00th=[ 176], 90.00th=[ 215], 95.00th=[ 228], 00:28:04.835 | 99.00th=[ 251], 99.50th=[ 321], 99.90th=[ 363], 99.95th=[ 368], 00:28:04.835 | 99.99th=[ 376] 00:28:04.835 bw ( KiB/s): min=73580, max=145408, per=6.85%, avg=110459.60, stdev=27403.03, samples=20 00:28:04.835 iops : min= 287, max= 568, avg=431.35, stdev=107.10, samples=20 00:28:04.835 lat (msec) : 20=0.09%, 50=0.37%, 100=0.75%, 250=97.74%, 500=1.05% 00:28:04.835 cpu : usr=1.12%, sys=1.23%, ctx=4442, majf=0, minf=1 00:28:04.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:04.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:04.835 issued rwts: total=0,4380,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.835 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:04.835 job4: (groupid=0, jobs=1): err= 0: pid=96592: Fri Apr 26 21:30:52 2024 00:28:04.835 write: IOPS=1079, BW=270MiB/s (283MB/s)(2730MiB/10118msec); 0 zone resets 00:28:04.835 slat (usec): min=19, max=19053, avg=905.65, stdev=1736.54 00:28:04.835 clat (msec): min=3, max=251, avg=58.36, stdev=27.88 00:28:04.835 lat (msec): min=3, max=251, avg=59.27, stdev=28.26 00:28:04.835 clat percentiles (msec): 00:28:04.835 | 1.00th=[ 37], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:28:04.835 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 45], 00:28:04.835 | 70.00th=[ 75], 80.00th=[ 79], 90.00th=[ 83], 95.00th=[ 108], 00:28:04.835 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 226], 99.95th=[ 243], 00:28:04.835 | 99.99th=[ 251] 00:28:04.835 bw ( KiB/s): min=104750, max=407552, per=17.24%, avg=277988.90, stdev=109204.56, samples=20 00:28:04.835 iops : min= 409, max= 1592, avg=1085.85, stdev=426.61, samples=20 00:28:04.835 lat (msec) : 4=0.02%, 10=0.04%, 20=0.25%, 50=61.72%, 100=30.36% 00:28:04.835 lat (msec) : 250=7.60%, 500=0.02% 00:28:04.835 cpu : usr=2.51%, sys=3.44%, ctx=13400, majf=0, minf=1 00:28:04.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:28:04.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:04.835 issued rwts: total=0,10920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.835 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:04.835 job5: (groupid=0, jobs=1): err= 0: pid=96598: Fri Apr 26 21:30:52 
2024 00:28:04.835 write: IOPS=563, BW=141MiB/s (148MB/s)(1413MiB/10034msec); 0 zone resets 00:28:04.835 slat (usec): min=24, max=68532, avg=1723.09, stdev=3627.72 00:28:04.835 clat (msec): min=2, max=203, avg=111.87, stdev=58.32 00:28:04.835 lat (msec): min=2, max=205, avg=113.59, stdev=59.21 00:28:04.835 clat percentiles (msec): 00:28:04.835 | 1.00th=[ 16], 5.00th=[ 26], 10.00th=[ 37], 20.00th=[ 39], 00:28:04.835 | 30.00th=[ 70], 40.00th=[ 78], 50.00th=[ 124], 60.00th=[ 148], 00:28:04.835 | 70.00th=[ 153], 80.00th=[ 163], 90.00th=[ 188], 95.00th=[ 190], 00:28:04.835 | 99.00th=[ 201], 99.50th=[ 201], 99.90th=[ 203], 99.95th=[ 203], 00:28:04.835 | 99.99th=[ 203] 00:28:04.835 bw ( KiB/s): min=83968, max=462435, per=8.87%, avg=143042.60, stdev=100900.34, samples=20 00:28:04.835 iops : min= 328, max= 1806, avg=558.70, stdev=394.09, samples=20 00:28:04.835 lat (msec) : 4=0.05%, 20=2.35%, 50=22.84%, 100=16.15%, 250=58.60% 00:28:04.835 cpu : usr=1.56%, sys=2.13%, ctx=7375, majf=0, minf=1 00:28:04.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:04.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:04.835 issued rwts: total=0,5652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.835 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:04.835 job6: (groupid=0, jobs=1): err= 0: pid=96600: Fri Apr 26 21:30:52 2024 00:28:04.835 write: IOPS=543, BW=136MiB/s (142MB/s)(1382MiB/10174msec); 0 zone resets 00:28:04.835 slat (usec): min=21, max=34588, avg=1631.20, stdev=3415.27 00:28:04.835 clat (msec): min=4, max=350, avg=116.07, stdev=58.24 00:28:04.835 lat (msec): min=4, max=350, avg=117.70, stdev=58.91 00:28:04.835 clat percentiles (msec): 00:28:04.835 | 1.00th=[ 13], 5.00th=[ 52], 10.00th=[ 73], 20.00th=[ 75], 00:28:04.835 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 80], 60.00th=[ 81], 00:28:04.835 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 201], 00:28:04.835 | 99.00th=[ 224], 99.50th=[ 268], 99.90th=[ 338], 99.95th=[ 338], 00:28:04.835 | 99.99th=[ 351] 00:28:04.835 bw ( KiB/s): min=78336, max=229940, per=8.67%, avg=139889.45, stdev=59937.19, samples=20 00:28:04.835 iops : min= 306, max= 898, avg=546.40, stdev=234.15, samples=20 00:28:04.835 lat (msec) : 10=0.29%, 20=1.34%, 50=3.15%, 100=57.57%, 250=37.04% 00:28:04.835 lat (msec) : 500=0.61% 00:28:04.835 cpu : usr=1.45%, sys=1.92%, ctx=7294, majf=0, minf=1 00:28:04.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:04.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:04.836 issued rwts: total=0,5529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.836 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:04.836 job7: (groupid=0, jobs=1): err= 0: pid=96601: Fri Apr 26 21:30:52 2024 00:28:04.836 write: IOPS=525, BW=131MiB/s (138MB/s)(1335MiB/10169msec); 0 zone resets 00:28:04.836 slat (usec): min=23, max=39441, avg=1857.45, stdev=3644.08 00:28:04.836 clat (msec): min=15, max=366, avg=119.99, stdev=53.27 00:28:04.836 lat (msec): min=15, max=366, avg=121.85, stdev=53.98 00:28:04.836 clat percentiles (msec): 00:28:04.836 | 1.00th=[ 56], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 77], 00:28:04.836 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 142], 00:28:04.836 | 70.00th=[ 150], 80.00th=[ 153], 90.00th=[ 209], 95.00th=[ 220], 
00:28:04.836 | 99.00th=[ 232], 99.50th=[ 266], 99.90th=[ 351], 99.95th=[ 351], 00:28:04.836 | 99.99th=[ 368] 00:28:04.836 bw ( KiB/s): min=73580, max=213504, per=8.37%, avg=135041.70, stdev=56175.55, samples=20 00:28:04.836 iops : min= 287, max= 834, avg=527.35, stdev=219.48, samples=20 00:28:04.836 lat (msec) : 20=0.15%, 50=0.62%, 100=54.32%, 250=44.28%, 500=0.64% 00:28:04.836 cpu : usr=1.33%, sys=1.92%, ctx=6776, majf=0, minf=1 00:28:04.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:04.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:04.836 issued rwts: total=0,5339,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.836 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:04.836 job8: (groupid=0, jobs=1): err= 0: pid=96602: Fri Apr 26 21:30:52 2024 00:28:04.836 write: IOPS=554, BW=139MiB/s (145MB/s)(1404MiB/10119msec); 0 zone resets 00:28:04.836 slat (usec): min=20, max=56826, avg=1739.91, stdev=3586.23 00:28:04.836 clat (msec): min=5, max=248, avg=113.56, stdev=59.31 00:28:04.836 lat (msec): min=5, max=248, avg=115.30, stdev=60.17 00:28:04.836 clat percentiles (msec): 00:28:04.836 | 1.00th=[ 18], 5.00th=[ 38], 10.00th=[ 39], 20.00th=[ 40], 00:28:04.836 | 30.00th=[ 42], 40.00th=[ 107], 50.00th=[ 142], 60.00th=[ 148], 00:28:04.836 | 70.00th=[ 153], 80.00th=[ 171], 90.00th=[ 186], 95.00th=[ 190], 00:28:04.836 | 99.00th=[ 197], 99.50th=[ 199], 99.90th=[ 241], 99.95th=[ 241], 00:28:04.836 | 99.99th=[ 249] 00:28:04.836 bw ( KiB/s): min=85844, max=410826, per=8.80%, avg=141999.95, stdev=95517.44, samples=20 00:28:04.836 iops : min= 335, max= 1604, avg=554.60, stdev=372.96, samples=20 00:28:04.836 lat (msec) : 10=0.25%, 20=1.14%, 50=32.58%, 100=2.74%, 250=63.29% 00:28:04.836 cpu : usr=1.37%, sys=1.83%, ctx=7822, majf=0, minf=1 00:28:04.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:28:04.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:04.836 issued rwts: total=0,5614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.836 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:04.836 job9: (groupid=0, jobs=1): err= 0: pid=96603: Fri Apr 26 21:30:52 2024 00:28:04.836 write: IOPS=435, BW=109MiB/s (114MB/s)(1101MiB/10114msec); 0 zone resets 00:28:04.836 slat (usec): min=21, max=51652, avg=2230.56, stdev=4075.65 00:28:04.836 clat (msec): min=8, max=250, avg=144.70, stdev=33.65 00:28:04.836 lat (msec): min=8, max=250, avg=146.93, stdev=34.05 00:28:04.836 clat percentiles (msec): 00:28:04.836 | 1.00th=[ 44], 5.00th=[ 79], 10.00th=[ 105], 20.00th=[ 113], 00:28:04.836 | 30.00th=[ 138], 40.00th=[ 144], 50.00th=[ 153], 60.00th=[ 155], 00:28:04.836 | 70.00th=[ 163], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 188], 00:28:04.836 | 99.00th=[ 190], 99.50th=[ 199], 99.90th=[ 243], 99.95th=[ 243], 00:28:04.836 | 99.99th=[ 251] 00:28:04.836 bw ( KiB/s): min=86016, max=200704, per=6.89%, avg=111131.90, stdev=27786.42, samples=20 00:28:04.836 iops : min= 336, max= 784, avg=434.00, stdev=108.55, samples=20 00:28:04.836 lat (msec) : 10=0.05%, 20=0.14%, 50=1.57%, 100=6.27%, 250=91.96% 00:28:04.836 lat (msec) : 500=0.02% 00:28:04.836 cpu : usr=1.16%, sys=1.39%, ctx=5912, majf=0, minf=1 00:28:04.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:04.836 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:04.836 issued rwts: total=0,4404,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.836 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:04.836 job10: (groupid=0, jobs=1): err= 0: pid=96604: Fri Apr 26 21:30:52 2024 00:28:04.836 write: IOPS=431, BW=108MiB/s (113MB/s)(1098MiB/10176msec); 0 zone resets 00:28:04.836 slat (usec): min=22, max=53080, avg=2273.31, stdev=4192.57 00:28:04.836 clat (msec): min=4, max=374, avg=145.91, stdev=42.77 00:28:04.836 lat (msec): min=4, max=374, avg=148.19, stdev=43.23 00:28:04.836 clat percentiles (msec): 00:28:04.836 | 1.00th=[ 73], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 114], 00:28:04.836 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 142], 60.00th=[ 150], 00:28:04.836 | 70.00th=[ 153], 80.00th=[ 180], 90.00th=[ 218], 95.00th=[ 230], 00:28:04.836 | 99.00th=[ 259], 99.50th=[ 296], 99.90th=[ 363], 99.95th=[ 363], 00:28:04.836 | 99.99th=[ 376] 00:28:04.836 bw ( KiB/s): min=69493, max=144095, per=6.87%, avg=110785.55, stdev=27581.71, samples=20 00:28:04.836 iops : min= 271, max= 562, avg=432.65, stdev=107.74, samples=20 00:28:04.836 lat (msec) : 10=0.18%, 20=0.09%, 50=0.36%, 100=0.73%, 250=96.99% 00:28:04.836 lat (msec) : 500=1.64% 00:28:04.836 cpu : usr=1.19%, sys=1.70%, ctx=6041, majf=0, minf=1 00:28:04.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:28:04.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:04.836 issued rwts: total=0,4392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.836 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:04.836 00:28:04.836 Run status group 0 (all jobs): 00:28:04.836 WRITE: bw=1575MiB/s (1652MB/s), 107MiB/s-270MiB/s (113MB/s-283MB/s), io=15.7GiB (16.8GB), run=10034-10189msec 00:28:04.836 00:28:04.836 Disk stats (read/write): 00:28:04.836 nvme0n1: ios=49/9084, merge=0/0, ticks=37/1212161, in_queue=1212198, util=97.76% 00:28:04.836 nvme10n1: ios=49/8617, merge=0/0, ticks=45/1210837, in_queue=1210882, util=98.01% 00:28:04.836 nvme1n1: ios=46/17811, merge=0/0, ticks=37/1208324, in_queue=1208361, util=97.91% 00:28:04.836 nvme2n1: ios=40/8624, merge=0/0, ticks=39/1206095, in_queue=1206134, util=98.00% 00:28:04.836 nvme3n1: ios=33/21701, merge=0/0, ticks=39/1211086, in_queue=1211125, util=98.01% 00:28:04.836 nvme4n1: ios=15/11125, merge=0/0, ticks=8/1219719, in_queue=1219727, util=98.04% 00:28:04.836 nvme5n1: ios=0/10921, merge=0/0, ticks=0/1214232, in_queue=1214232, util=98.16% 00:28:04.836 nvme6n1: ios=0/10540, merge=0/0, ticks=0/1208604, in_queue=1208604, util=98.26% 00:28:04.836 nvme7n1: ios=0/11085, merge=0/0, ticks=0/1213003, in_queue=1213003, util=98.61% 00:28:04.836 nvme8n1: ios=0/8665, merge=0/0, ticks=0/1212773, in_queue=1212773, util=98.70% 00:28:04.836 nvme9n1: ios=0/8657, merge=0/0, ticks=0/1210435, in_queue=1210435, util=98.98% 00:28:04.836 21:30:52 -- target/multiconnection.sh@36 -- # sync 00:28:04.836 21:30:52 -- target/multiconnection.sh@37 -- # seq 1 11 00:28:04.836 21:30:52 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.836 21:30:52 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:04.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:04.836 21:30:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 
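The xtrace lines above and below trace the per-subsystem teardown one command at a time; gathered into plain shell, the sequence is approximately the following (reconstructed from the traced commands, not copied from multiconnection.sh; NVMF_SUBSYS is 11 in this run).

    # Sketch of the traced teardown loop, reconstructed from the xtrace output above and below.
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"              # drop the host-side connection
        waitforserial_disconnect "SPDK$i"                             # helper seen in the trace: poll lsblk until serial SPDK$i disappears
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # helper seen in the trace: delete the target-side subsystem
    done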
00:28:04.836 21:30:53 -- common/autotest_common.sh@1205 -- # local i=0 00:28:04.836 21:30:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:28:04.836 21:30:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:28:04.836 21:30:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK1 00:28:04.836 21:30:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:28:04.836 21:30:53 -- common/autotest_common.sh@1217 -- # return 0 00:28:04.836 21:30:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.836 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.836 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:28:04.836 21:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.836 21:30:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.836 21:30:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:04.836 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:04.836 21:30:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:28:04.836 21:30:53 -- common/autotest_common.sh@1205 -- # local i=0 00:28:04.836 21:30:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:28:04.836 21:30:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:28:04.836 21:30:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:28:04.836 21:30:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK2 00:28:04.836 21:30:53 -- common/autotest_common.sh@1217 -- # return 0 00:28:04.836 21:30:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:04.836 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.836 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:28:04.836 21:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.836 21:30:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.836 21:30:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:04.836 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:04.836 21:30:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:28:04.836 21:30:53 -- common/autotest_common.sh@1205 -- # local i=0 00:28:04.836 21:30:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:28:04.836 21:30:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:28:04.836 21:30:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK3 00:28:04.836 21:30:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:28:04.836 21:30:53 -- common/autotest_common.sh@1217 -- # return 0 00:28:04.836 21:30:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:04.836 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.836 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:28:04.836 21:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.836 21:30:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.836 21:30:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:04.836 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:04.836 21:30:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:28:04.836 21:30:53 -- common/autotest_common.sh@1205 -- # local i=0 00:28:04.836 21:30:53 -- 
common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:28:04.836 21:30:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:28:04.836 21:30:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:28:04.836 21:30:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK4 00:28:04.836 21:30:53 -- common/autotest_common.sh@1217 -- # return 0 00:28:04.836 21:30:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:04.836 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.836 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:28:04.836 21:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.836 21:30:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.836 21:30:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:04.837 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:04.837 21:30:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:28:04.837 21:30:53 -- common/autotest_common.sh@1205 -- # local i=0 00:28:04.837 21:30:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:28:04.837 21:30:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:28:04.837 21:30:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:28:04.837 21:30:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK5 00:28:04.837 21:30:53 -- common/autotest_common.sh@1217 -- # return 0 00:28:04.837 21:30:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:04.837 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.837 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:28:04.837 21:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.837 21:30:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.837 21:30:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:28:04.837 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:28:04.837 21:30:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:28:04.837 21:30:53 -- common/autotest_common.sh@1205 -- # local i=0 00:28:04.837 21:30:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:28:04.837 21:30:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:28:04.837 21:30:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:28:04.837 21:30:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK6 00:28:04.837 21:30:53 -- common/autotest_common.sh@1217 -- # return 0 00:28:04.837 21:30:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:28:04.837 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.837 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:28:04.837 21:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.837 21:30:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.837 21:30:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:28:04.837 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:28:04.837 21:30:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:28:04.837 21:30:53 -- common/autotest_common.sh@1205 -- # local i=0 00:28:04.837 21:30:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:28:04.837 21:30:53 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:28:04.837 21:30:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:28:04.837 21:30:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK7 00:28:04.837 21:30:53 -- common/autotest_common.sh@1217 -- # return 0 00:28:04.837 21:30:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:28:04.837 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.837 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:28:04.837 21:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.837 21:30:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.837 21:30:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:28:04.837 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:28:04.837 21:30:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:28:04.837 21:30:53 -- common/autotest_common.sh@1205 -- # local i=0 00:28:04.837 21:30:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:28:04.837 21:30:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:28:04.837 21:30:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK8 00:28:04.837 21:30:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:28:04.837 21:30:53 -- common/autotest_common.sh@1217 -- # return 0 00:28:04.837 21:30:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:28:04.837 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.837 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:28:04.837 21:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.837 21:30:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.837 21:30:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:04.837 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:04.837 21:30:53 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:04.837 21:30:53 -- common/autotest_common.sh@1205 -- # local i=0 00:28:04.837 21:30:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:28:04.837 21:30:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:28:04.837 21:30:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK9 00:28:04.837 21:30:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:28:04.837 21:30:53 -- common/autotest_common.sh@1217 -- # return 0 00:28:04.837 21:30:53 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:04.837 21:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.837 21:30:53 -- common/autotest_common.sh@10 -- # set +x 00:28:04.837 21:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.837 21:30:53 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.837 21:30:53 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:28:04.837 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:04.837 21:30:54 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:04.837 21:30:54 -- common/autotest_common.sh@1205 -- # local i=0 00:28:04.837 21:30:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:28:04.837 21:30:54 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:28:04.837 21:30:54 -- 
common/autotest_common.sh@1213 -- # grep -q -w SPDK10 00:28:04.837 21:30:54 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:28:04.837 21:30:54 -- common/autotest_common.sh@1217 -- # return 0 00:28:04.837 21:30:54 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:04.837 21:30:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.837 21:30:54 -- common/autotest_common.sh@10 -- # set +x 00:28:04.837 21:30:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.837 21:30:54 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:04.837 21:30:54 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:05.095 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:05.095 21:30:54 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:28:05.095 21:30:54 -- common/autotest_common.sh@1205 -- # local i=0 00:28:05.095 21:30:54 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:28:05.095 21:30:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:28:05.095 21:30:54 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:28:05.095 21:30:54 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK11 00:28:05.095 21:30:54 -- common/autotest_common.sh@1217 -- # return 0 00:28:05.095 21:30:54 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:28:05.095 21:30:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.095 21:30:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.095 21:30:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.095 21:30:54 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:05.095 21:30:54 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:05.095 21:30:54 -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:05.095 21:30:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:05.095 21:30:54 -- nvmf/common.sh@117 -- # sync 00:28:05.095 21:30:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:05.095 21:30:54 -- nvmf/common.sh@120 -- # set +e 00:28:05.095 21:30:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:05.095 21:30:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:05.095 rmmod nvme_tcp 00:28:05.095 rmmod nvme_fabrics 00:28:05.095 rmmod nvme_keyring 00:28:05.095 21:30:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:05.095 21:30:54 -- nvmf/common.sh@124 -- # set -e 00:28:05.095 21:30:54 -- nvmf/common.sh@125 -- # return 0 00:28:05.095 21:30:54 -- nvmf/common.sh@478 -- # '[' -n 95893 ']' 00:28:05.095 21:30:54 -- nvmf/common.sh@479 -- # killprocess 95893 00:28:05.095 21:30:54 -- common/autotest_common.sh@936 -- # '[' -z 95893 ']' 00:28:05.096 21:30:54 -- common/autotest_common.sh@940 -- # kill -0 95893 00:28:05.096 21:30:54 -- common/autotest_common.sh@941 -- # uname 00:28:05.096 21:30:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:05.096 21:30:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95893 00:28:05.096 killing process with pid 95893 00:28:05.096 21:30:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:05.096 21:30:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:05.096 21:30:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95893' 00:28:05.096 21:30:54 -- common/autotest_common.sh@955 -- # kill 95893 00:28:05.096 21:30:54 -- 
common/autotest_common.sh@960 -- # wait 95893 00:28:05.662 21:30:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:05.662 21:30:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:05.662 21:30:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:05.662 21:30:54 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.662 21:30:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:05.662 21:30:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.662 21:30:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.662 21:30:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.662 21:30:54 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:05.662 00:28:05.662 real 0m49.837s 00:28:05.662 user 2m52.356s 00:28:05.662 sys 0m23.962s 00:28:05.662 21:30:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:05.662 21:30:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.662 ************************************ 00:28:05.662 END TEST nvmf_multiconnection 00:28:05.662 ************************************ 00:28:05.662 21:30:54 -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:05.662 21:30:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:05.662 21:30:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:05.662 21:30:54 -- common/autotest_common.sh@10 -- # set +x 00:28:05.946 ************************************ 00:28:05.946 START TEST nvmf_initiator_timeout 00:28:05.946 ************************************ 00:28:05.946 21:30:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:05.946 * Looking for test storage... 
00:28:05.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:05.946 21:30:55 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:05.946 21:30:55 -- nvmf/common.sh@7 -- # uname -s 00:28:05.946 21:30:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.946 21:30:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.946 21:30:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.946 21:30:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.946 21:30:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.946 21:30:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.946 21:30:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.946 21:30:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.946 21:30:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.946 21:30:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.946 21:30:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:28:05.946 21:30:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:28:05.946 21:30:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.946 21:30:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.946 21:30:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:05.946 21:30:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.946 21:30:55 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:05.946 21:30:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.946 21:30:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.946 21:30:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.946 21:30:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.946 21:30:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.946 21:30:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.946 21:30:55 -- paths/export.sh@5 -- # export PATH 00:28:05.946 21:30:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.946 21:30:55 -- nvmf/common.sh@47 -- # : 0 00:28:05.946 21:30:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:05.946 21:30:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:05.946 21:30:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.946 21:30:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.946 21:30:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.946 21:30:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:05.946 21:30:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:05.946 21:30:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:05.946 21:30:55 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:05.946 21:30:55 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:05.946 21:30:55 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:05.946 21:30:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:05.946 21:30:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.946 21:30:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:05.946 21:30:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:05.946 21:30:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:05.946 21:30:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.946 21:30:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.946 21:30:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.946 21:30:55 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:28:05.946 21:30:55 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:28:05.946 21:30:55 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:28:05.946 21:30:55 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:28:05.946 21:30:55 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:28:05.946 21:30:55 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:28:05.946 21:30:55 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.946 21:30:55 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.946 21:30:55 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:05.946 21:30:55 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:05.946 21:30:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:05.946 21:30:55 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:05.946 21:30:55 
-- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:05.946 21:30:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.946 21:30:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:05.946 21:30:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:05.946 21:30:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:05.946 21:30:55 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:05.946 21:30:55 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:05.946 21:30:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:05.946 Cannot find device "nvmf_tgt_br" 00:28:05.946 21:30:55 -- nvmf/common.sh@155 -- # true 00:28:05.946 21:30:55 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:05.946 Cannot find device "nvmf_tgt_br2" 00:28:05.946 21:30:55 -- nvmf/common.sh@156 -- # true 00:28:05.946 21:30:55 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:05.946 21:30:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:05.946 Cannot find device "nvmf_tgt_br" 00:28:05.946 21:30:55 -- nvmf/common.sh@158 -- # true 00:28:05.946 21:30:55 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:05.946 Cannot find device "nvmf_tgt_br2" 00:28:05.946 21:30:55 -- nvmf/common.sh@159 -- # true 00:28:05.946 21:30:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:06.205 21:30:55 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:06.205 21:30:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:06.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:06.205 21:30:55 -- nvmf/common.sh@162 -- # true 00:28:06.205 21:30:55 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:06.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:06.205 21:30:55 -- nvmf/common.sh@163 -- # true 00:28:06.205 21:30:55 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:06.205 21:30:55 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:06.205 21:30:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:06.205 21:30:55 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:06.205 21:30:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:06.205 21:30:55 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:06.205 21:30:55 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:06.205 21:30:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:06.205 21:30:55 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:06.205 21:30:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:06.205 21:30:55 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:06.205 21:30:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:06.205 21:30:55 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:06.205 21:30:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:06.205 21:30:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:06.205 21:30:55 -- nvmf/common.sh@189 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:28:06.205 21:30:55 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:06.205 21:30:55 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:06.205 21:30:55 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:06.205 21:30:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:06.205 21:30:55 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:06.205 21:30:55 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:06.205 21:30:55 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:06.205 21:30:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:06.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:28:06.205 00:28:06.205 --- 10.0.0.2 ping statistics --- 00:28:06.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.205 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:28:06.205 21:30:55 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:06.205 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:06.205 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:28:06.205 00:28:06.205 --- 10.0.0.3 ping statistics --- 00:28:06.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.205 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:28:06.205 21:30:55 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:06.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:06.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:28:06.205 00:28:06.205 --- 10.0.0.1 ping statistics --- 00:28:06.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.205 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:28:06.205 21:30:55 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.205 21:30:55 -- nvmf/common.sh@422 -- # return 0 00:28:06.205 21:30:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:06.205 21:30:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.205 21:30:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:06.205 21:30:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:06.205 21:30:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.205 21:30:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:06.205 21:30:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:06.205 21:30:55 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:06.205 21:30:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:06.205 21:30:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:06.205 21:30:55 -- common/autotest_common.sh@10 -- # set +x 00:28:06.464 21:30:55 -- nvmf/common.sh@470 -- # nvmfpid=96977 00:28:06.464 21:30:55 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:06.464 21:30:55 -- nvmf/common.sh@471 -- # waitforlisten 96977 00:28:06.464 21:30:55 -- common/autotest_common.sh@817 -- # '[' -z 96977 ']' 00:28:06.464 21:30:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.464 21:30:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:06.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
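For reference, the nvmf_veth_init sequence traced above boils down to the following stand-alone sketch (same namespace, interface, and address names as in the log; assumes root privileges and iproute2/iptables available):

    # The target side lives in its own network namespace; the initiator stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: one for the initiator, two for the target.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace and assign the 10.0.0.0/24 addresses.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the root-namespace peers together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP traffic (port 4420) in and allow forwarding across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Connectivity checks mirroring the pings in the log.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this topology in place the target is started inside the namespace and the initiator reaches it at 10.0.0.2:4420 through the bridge, which is what the startup notices below show.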
00:28:06.464 21:30:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.464 21:30:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:06.464 21:30:55 -- common/autotest_common.sh@10 -- # set +x 00:28:06.464 [2024-04-26 21:30:55.496360] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:28:06.464 [2024-04-26 21:30:55.496425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.464 [2024-04-26 21:30:55.636462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:06.464 [2024-04-26 21:30:55.688878] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.464 [2024-04-26 21:30:55.688931] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:06.464 [2024-04-26 21:30:55.688937] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.464 [2024-04-26 21:30:55.688942] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.464 [2024-04-26 21:30:55.688947] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.464 [2024-04-26 21:30:55.689281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.464 [2024-04-26 21:30:55.689661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.464 [2024-04-26 21:30:55.689472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.464 [2024-04-26 21:30:55.689665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:07.401 21:30:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:07.401 21:30:56 -- common/autotest_common.sh@850 -- # return 0 00:28:07.401 21:30:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:07.401 21:30:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:07.401 21:30:56 -- common/autotest_common.sh@10 -- # set +x 00:28:07.401 21:30:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.401 21:30:56 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:07.401 21:30:56 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:07.401 21:30:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.401 21:30:56 -- common/autotest_common.sh@10 -- # set +x 00:28:07.401 Malloc0 00:28:07.401 21:30:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.401 21:30:56 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:07.401 21:30:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.401 21:30:56 -- common/autotest_common.sh@10 -- # set +x 00:28:07.401 Delay0 00:28:07.401 21:30:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.401 21:30:56 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:07.401 21:30:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.401 21:30:56 -- common/autotest_common.sh@10 -- # set +x 00:28:07.401 [2024-04-26 21:30:56.523543] tcp.c: 
669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.401 21:30:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.401 21:30:56 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:07.401 21:30:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.401 21:30:56 -- common/autotest_common.sh@10 -- # set +x 00:28:07.401 21:30:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.401 21:30:56 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.401 21:30:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.401 21:30:56 -- common/autotest_common.sh@10 -- # set +x 00:28:07.401 21:30:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.401 21:30:56 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:07.401 21:30:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.401 21:30:56 -- common/autotest_common.sh@10 -- # set +x 00:28:07.401 [2024-04-26 21:30:56.551645] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:07.401 21:30:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.402 21:30:56 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:07.660 21:30:56 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:07.660 21:30:56 -- common/autotest_common.sh@1184 -- # local i=0 00:28:07.660 21:30:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:28:07.660 21:30:56 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:28:07.660 21:30:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:28:09.617 21:30:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:28:09.617 21:30:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:28:09.617 21:30:58 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:28:09.617 21:30:58 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:28:09.617 21:30:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:28:09.617 21:30:58 -- common/autotest_common.sh@1194 -- # return 0 00:28:09.617 21:30:58 -- target/initiator_timeout.sh@35 -- # fio_pid=97060 00:28:09.617 21:30:58 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:09.617 21:30:58 -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:09.617 [global] 00:28:09.617 thread=1 00:28:09.617 invalidate=1 00:28:09.617 rw=write 00:28:09.617 time_based=1 00:28:09.617 runtime=60 00:28:09.617 ioengine=libaio 00:28:09.617 direct=1 00:28:09.617 bs=4096 00:28:09.617 iodepth=1 00:28:09.617 norandommap=0 00:28:09.617 numjobs=1 00:28:09.617 00:28:09.617 verify_dump=1 00:28:09.617 verify_backlog=512 00:28:09.617 verify_state_save=0 00:28:09.617 do_verify=1 00:28:09.617 verify=crc32c-intel 00:28:09.617 [job0] 00:28:09.617 filename=/dev/nvme0n1 00:28:09.617 Could not set queue depth (nvme0n1) 00:28:09.875 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:09.875 fio-3.35 00:28:09.875 Starting 1 thread 00:28:13.159 21:31:01 -- 
target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:13.159 21:31:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.159 21:31:01 -- common/autotest_common.sh@10 -- # set +x 00:28:13.159 true 00:28:13.159 21:31:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.159 21:31:01 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:13.159 21:31:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.159 21:31:01 -- common/autotest_common.sh@10 -- # set +x 00:28:13.159 true 00:28:13.159 21:31:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.159 21:31:01 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:13.159 21:31:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.159 21:31:01 -- common/autotest_common.sh@10 -- # set +x 00:28:13.159 true 00:28:13.159 21:31:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.159 21:31:01 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:13.159 21:31:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.159 21:31:01 -- common/autotest_common.sh@10 -- # set +x 00:28:13.159 true 00:28:13.159 21:31:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.159 21:31:01 -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:15.694 21:31:04 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:15.694 21:31:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.694 21:31:04 -- common/autotest_common.sh@10 -- # set +x 00:28:15.694 true 00:28:15.694 21:31:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.694 21:31:04 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:15.694 21:31:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.694 21:31:04 -- common/autotest_common.sh@10 -- # set +x 00:28:15.694 true 00:28:15.694 21:31:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.694 21:31:04 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:15.694 21:31:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.694 21:31:04 -- common/autotest_common.sh@10 -- # set +x 00:28:15.694 true 00:28:15.694 21:31:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.694 21:31:04 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:15.694 21:31:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.694 21:31:04 -- common/autotest_common.sh@10 -- # set +x 00:28:15.694 true 00:28:15.694 21:31:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.694 21:31:04 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:15.694 21:31:04 -- target/initiator_timeout.sh@54 -- # wait 97060 00:29:11.967 00:29:11.967 job0: (groupid=0, jobs=1): err= 0: pid=97081: Fri Apr 26 21:31:59 2024 00:29:11.967 read: IOPS=1006, BW=4028KiB/s (4124kB/s)(236MiB/60000msec) 00:29:11.967 slat (usec): min=6, max=15008, avg=10.55, stdev=77.61 00:29:11.967 clat (usec): min=123, max=40499k, avg=834.18, stdev=164763.82 00:29:11.967 lat (usec): min=130, max=40499k, avg=844.73, stdev=164763.83 00:29:11.967 clat percentiles (usec): 00:29:11.967 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:29:11.967 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 
60.00th=[ 165], 00:29:11.967 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 186], 00:29:11.967 | 99.00th=[ 217], 99.50th=[ 229], 99.90th=[ 285], 99.95th=[ 314], 00:29:11.967 | 99.99th=[ 412] 00:29:11.967 write: IOPS=1009, BW=4039KiB/s (4136kB/s)(237MiB/60000msec); 0 zone resets 00:29:11.967 slat (usec): min=8, max=890, avg=14.99, stdev= 6.21 00:29:11.967 clat (usec): min=85, max=1321, avg=130.74, stdev=14.29 00:29:11.967 lat (usec): min=110, max=1357, avg=145.74, stdev=16.48 00:29:11.967 clat percentiles (usec): 00:29:11.967 | 1.00th=[ 112], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 124], 00:29:11.967 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:29:11.967 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:29:11.967 | 99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 233], 99.95th=[ 260], 00:29:11.967 | 99.99th=[ 635] 00:29:11.967 bw ( KiB/s): min= 4096, max=14232, per=100.00%, avg=12182.97, stdev=1732.55, samples=39 00:29:11.967 iops : min= 1024, max= 3558, avg=3045.74, stdev=433.14, samples=39 00:29:11.967 lat (usec) : 100=0.01%, 250=99.87%, 500=0.11%, 750=0.01%, 1000=0.01% 00:29:11.967 lat (msec) : 2=0.01%, 50=0.01%, >=2000=0.01% 00:29:11.967 cpu : usr=0.41%, sys=1.84%, ctx=121024, majf=0, minf=2 00:29:11.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:11.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.967 issued rwts: total=60416,60590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:11.967 00:29:11.967 Run status group 0 (all jobs): 00:29:11.967 READ: bw=4028KiB/s (4124kB/s), 4028KiB/s-4028KiB/s (4124kB/s-4124kB/s), io=236MiB (247MB), run=60000-60000msec 00:29:11.967 WRITE: bw=4039KiB/s (4136kB/s), 4039KiB/s-4039KiB/s (4136kB/s-4136kB/s), io=237MiB (248MB), run=60000-60000msec 00:29:11.967 00:29:11.967 Disk stats (read/write): 00:29:11.967 nvme0n1: ios=60265/60416, merge=0/0, ticks=10284/8382, in_queue=18666, util=99.78% 00:29:11.967 21:31:59 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:11.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:11.967 21:31:59 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:11.967 21:31:59 -- common/autotest_common.sh@1205 -- # local i=0 00:29:11.967 21:31:59 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:29:11.967 21:31:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:11.967 21:31:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:11.967 21:31:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:29:11.967 21:31:59 -- common/autotest_common.sh@1217 -- # return 0 00:29:11.967 21:31:59 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:11.967 nvmf hotplug test: fio successful as expected 00:29:11.967 21:31:59 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:11.967 21:31:59 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:11.967 21:31:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:11.967 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:29:11.967 21:31:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:11.967 21:31:59 -- target/initiator_timeout.sh@69 -- # rm -f 
./local-job0-0-verify.state 00:29:11.967 21:31:59 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:11.967 21:31:59 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:11.967 21:31:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:11.967 21:31:59 -- nvmf/common.sh@117 -- # sync 00:29:11.967 21:31:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:11.967 21:31:59 -- nvmf/common.sh@120 -- # set +e 00:29:11.967 21:31:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:11.967 21:31:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:11.967 rmmod nvme_tcp 00:29:11.967 rmmod nvme_fabrics 00:29:11.967 rmmod nvme_keyring 00:29:11.967 21:31:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:11.967 21:31:59 -- nvmf/common.sh@124 -- # set -e 00:29:11.967 21:31:59 -- nvmf/common.sh@125 -- # return 0 00:29:11.967 21:31:59 -- nvmf/common.sh@478 -- # '[' -n 96977 ']' 00:29:11.967 21:31:59 -- nvmf/common.sh@479 -- # killprocess 96977 00:29:11.967 21:31:59 -- common/autotest_common.sh@936 -- # '[' -z 96977 ']' 00:29:11.967 21:31:59 -- common/autotest_common.sh@940 -- # kill -0 96977 00:29:11.967 21:31:59 -- common/autotest_common.sh@941 -- # uname 00:29:11.967 21:31:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:11.967 21:31:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96977 00:29:11.967 killing process with pid 96977 00:29:11.967 21:31:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:11.967 21:31:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:11.967 21:31:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96977' 00:29:11.967 21:31:59 -- common/autotest_common.sh@955 -- # kill 96977 00:29:11.968 21:31:59 -- common/autotest_common.sh@960 -- # wait 96977 00:29:11.968 21:31:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:11.968 21:31:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:11.968 21:31:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:11.968 21:31:59 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:11.968 21:31:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:11.968 21:31:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.968 21:31:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:11.968 21:31:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.968 21:31:59 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:11.968 00:29:11.968 real 1m4.568s 00:29:11.968 user 4m8.596s 00:29:11.968 sys 0m6.678s 00:29:11.968 21:31:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:11.968 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:29:11.968 ************************************ 00:29:11.968 END TEST nvmf_initiator_timeout 00:29:11.968 ************************************ 00:29:11.968 21:31:59 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:29:11.968 21:31:59 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:29:11.968 21:31:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:11.968 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:29:11.968 21:31:59 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:29:11.968 21:31:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:11.968 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:29:11.968 21:31:59 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:29:11.968 21:31:59 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:11.968 21:31:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:11.968 21:31:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:11.968 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:29:11.968 ************************************ 00:29:11.968 START TEST nvmf_multicontroller 00:29:11.968 ************************************ 00:29:11.968 21:31:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:11.968 * Looking for test storage... 00:29:11.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:11.968 21:31:59 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:11.968 21:31:59 -- nvmf/common.sh@7 -- # uname -s 00:29:11.968 21:31:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.968 21:31:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.968 21:31:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.968 21:31:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.968 21:31:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.968 21:31:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.968 21:31:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.968 21:31:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.968 21:31:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.968 21:31:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.968 21:31:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:29:11.968 21:31:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:29:11.968 21:31:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.968 21:31:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.968 21:31:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:11.968 21:31:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.968 21:31:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:11.968 21:31:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.968 21:31:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.968 21:31:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.968 21:31:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.968 21:31:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.968 21:31:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.968 21:31:59 -- paths/export.sh@5 -- # export PATH 00:29:11.968 21:31:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.968 21:31:59 -- nvmf/common.sh@47 -- # : 0 00:29:11.968 21:31:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:11.968 21:31:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:11.968 21:31:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.968 21:31:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.968 21:31:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.968 21:31:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:11.968 21:31:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:11.968 21:31:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:11.968 21:31:59 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:11.968 21:31:59 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:11.968 21:31:59 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:11.968 21:31:59 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:11.968 21:31:59 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:11.968 21:31:59 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:11.968 21:31:59 -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:11.968 21:31:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:11.968 21:31:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.968 21:31:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:11.969 21:31:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:11.969 21:31:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:11.969 21:31:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.969 21:31:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:11.969 21:31:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
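The _remove_spdk_ns call here clears out any namespace left over from a previous test before the topology is rebuilt. The helper's body is not shown in this log, so the following is only a plausible minimal equivalent:

    # Assumed shape of the stale-namespace cleanup run by nvmftestinit:
    # kill anything still running inside the old namespace, then delete it.
    if ip netns list | grep -q nvmf_tgt_ns_spdk; then
        ip netns pids nvmf_tgt_ns_spdk | xargs -r kill
        ip netns delete nvmf_tgt_ns_spdk
    fi

The "Cannot find device" and "Cannot open network namespace" messages that follow are expected on a freshly cleaned host: the teardown commands run unconditionally (note the "# true" markers) and simply find nothing to remove before the veth topology is rebuilt.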
00:29:11.969 21:31:59 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:11.969 21:31:59 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:11.969 21:31:59 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:11.969 21:31:59 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:11.969 21:31:59 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:11.969 21:31:59 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:11.969 21:31:59 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.969 21:31:59 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.969 21:31:59 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:11.969 21:31:59 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:11.969 21:31:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:11.969 21:31:59 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:11.969 21:31:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:11.969 21:31:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.969 21:31:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:11.969 21:31:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:11.969 21:31:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:11.969 21:31:59 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:11.969 21:31:59 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:11.969 21:31:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:11.969 Cannot find device "nvmf_tgt_br" 00:29:11.969 21:31:59 -- nvmf/common.sh@155 -- # true 00:29:11.969 21:31:59 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:11.969 Cannot find device "nvmf_tgt_br2" 00:29:11.969 21:31:59 -- nvmf/common.sh@156 -- # true 00:29:11.969 21:31:59 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:11.969 21:31:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:11.969 Cannot find device "nvmf_tgt_br" 00:29:11.969 21:31:59 -- nvmf/common.sh@158 -- # true 00:29:11.969 21:31:59 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:11.969 Cannot find device "nvmf_tgt_br2" 00:29:11.969 21:32:00 -- nvmf/common.sh@159 -- # true 00:29:11.969 21:32:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:11.969 21:32:00 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:11.969 21:32:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:11.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:11.969 21:32:00 -- nvmf/common.sh@162 -- # true 00:29:11.969 21:32:00 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:11.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:11.969 21:32:00 -- nvmf/common.sh@163 -- # true 00:29:11.969 21:32:00 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:11.969 21:32:00 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:11.969 21:32:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:11.969 21:32:00 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:11.969 21:32:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:11.969 21:32:00 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
00:29:11.969 21:32:00 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:11.969 21:32:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:11.969 21:32:00 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:11.969 21:32:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:11.969 21:32:00 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:11.969 21:32:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:11.969 21:32:00 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:11.969 21:32:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:11.969 21:32:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:11.969 21:32:00 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:11.969 21:32:00 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:11.969 21:32:00 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:11.969 21:32:00 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:11.969 21:32:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:11.969 21:32:00 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:11.969 21:32:00 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:11.969 21:32:00 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:11.969 21:32:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:11.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:29:11.969 00:29:11.969 --- 10.0.0.2 ping statistics --- 00:29:11.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.969 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:29:11.969 21:32:00 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:11.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:11.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:29:11.969 00:29:11.969 --- 10.0.0.3 ping statistics --- 00:29:11.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.969 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:29:11.969 21:32:00 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:11.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:11.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:29:11.969 00:29:11.969 --- 10.0.0.1 ping statistics --- 00:29:11.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.969 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:29:11.969 21:32:00 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.969 21:32:00 -- nvmf/common.sh@422 -- # return 0 00:29:11.969 21:32:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:11.969 21:32:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.969 21:32:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:11.969 21:32:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:11.969 21:32:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.969 21:32:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:11.969 21:32:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:11.969 21:32:00 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:11.969 21:32:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:11.970 21:32:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:11.970 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:29:11.970 21:32:00 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:11.970 21:32:00 -- nvmf/common.sh@470 -- # nvmfpid=97926 00:29:11.970 21:32:00 -- nvmf/common.sh@471 -- # waitforlisten 97926 00:29:11.970 21:32:00 -- common/autotest_common.sh@817 -- # '[' -z 97926 ']' 00:29:11.970 21:32:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.970 21:32:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:11.970 21:32:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.970 21:32:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:11.970 21:32:00 -- common/autotest_common.sh@10 -- # set +x 00:29:11.970 [2024-04-26 21:32:00.288004] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:29:11.970 [2024-04-26 21:32:00.288080] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.970 [2024-04-26 21:32:00.430812] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:11.970 [2024-04-26 21:32:00.484046] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.970 [2024-04-26 21:32:00.484098] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.970 [2024-04-26 21:32:00.484105] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.970 [2024-04-26 21:32:00.484111] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.970 [2024-04-26 21:32:00.484115] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
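nvmfappstart then launches the target inside the namespace and blocks until its RPC socket answers. Condensed, and with an illustrative polling loop standing in for the waitforlisten helper (not its exact code):

    # Start the NVMe-oF target in the test namespace:
    #   -i 0 shared-memory id, -e 0xFFFF tracepoint group mask, -m 0xE core mask (cores 1-3)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket until the target is ready (stand-in for waitforlisten).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done

The EAL initialization notices above and the reactor startup messages that follow are the target coming up on the 0xE core mask; once the socket is live the test starts issuing rpc_cmd calls against it.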
00:29:11.970 [2024-04-26 21:32:00.484414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.970 [2024-04-26 21:32:00.484579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.970 [2024-04-26 21:32:00.484585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.970 21:32:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:11.970 21:32:01 -- common/autotest_common.sh@850 -- # return 0 00:29:11.970 21:32:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:11.970 21:32:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:11.970 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 21:32:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.230 21:32:01 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.230 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.230 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 [2024-04-26 21:32:01.270551] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.230 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.230 21:32:01 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:12.230 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.230 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 Malloc0 00:29:12.230 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.230 21:32:01 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:12.230 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.230 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.230 21:32:01 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:12.230 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.230 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.230 21:32:01 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.230 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.230 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 [2024-04-26 21:32:01.338149] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.230 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.230 21:32:01 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:12.230 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.230 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 [2024-04-26 21:32:01.350099] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:12.230 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.230 21:32:01 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:12.230 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.230 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 Malloc1 00:29:12.230 21:32:01 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.230 21:32:01 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:12.230 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.230 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.230 21:32:01 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:12.230 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.230 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.230 21:32:01 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:12.230 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.230 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.230 21:32:01 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:12.230 21:32:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.230 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:29:12.230 21:32:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.230 21:32:01 -- host/multicontroller.sh@44 -- # bdevperf_pid=97978 00:29:12.231 21:32:01 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:12.231 21:32:01 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:12.231 21:32:01 -- host/multicontroller.sh@47 -- # waitforlisten 97978 /var/tmp/bdevperf.sock 00:29:12.231 21:32:01 -- common/autotest_common.sh@817 -- # '[' -z 97978 ']' 00:29:12.231 21:32:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:12.231 21:32:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:12.231 21:32:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:12.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
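Before bdevperf attaches anything, the rpc_cmd calls traced above have built the target-side configuration: a TCP transport, two subsystems each backed by a malloc bdev, and listeners on ports 4420 and 4421. A condensed equivalent using rpc.py directly (rpc_cmd is a thin wrapper around it):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport with 8192-byte in-capsule data.
    "$rpc" nvmf_create_transport -t tcp -o -u 8192

    # Two subsystems, each with a 64 MiB / 512 B-block malloc namespace
    # and listeners on both 10.0.0.2:4420 and 10.0.0.2:4421.
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    "$rpc" bdev_malloc_create 64 512 -b Malloc1
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

    # bdevperf in wait-for-RPC mode (-z): 128-deep 4 KiB writes, 1 s runs,
    # controlled over its own socket so controllers can be attached and detached on the fly.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    bdevperf_pid=$!

The trace continues below with waitforlisten on /var/tmp/bdevperf.sock and the controller attach sequence issued against that socket.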
00:29:12.231 21:32:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:12.231 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:29:13.167 21:32:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:13.167 21:32:02 -- common/autotest_common.sh@850 -- # return 0 00:29:13.167 21:32:02 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:13.167 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.167 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.426 NVMe0n1 00:29:13.426 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.426 21:32:02 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:13.426 21:32:02 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:13.426 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.426 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.426 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.426 1 00:29:13.426 21:32:02 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:13.426 21:32:02 -- common/autotest_common.sh@638 -- # local es=0 00:29:13.426 21:32:02 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:13.426 21:32:02 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:13.426 21:32:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.426 21:32:02 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:13.426 21:32:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.426 21:32:02 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:13.426 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.426 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.426 2024/04/26 21:32:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:29:13.426 request: 00:29:13.426 { 00:29:13.426 "method": "bdev_nvme_attach_controller", 00:29:13.426 "params": { 00:29:13.426 "name": "NVMe0", 00:29:13.426 "trtype": "tcp", 00:29:13.426 "traddr": "10.0.0.2", 00:29:13.426 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:13.426 "hostaddr": "10.0.0.2", 00:29:13.426 "hostsvcid": "60000", 00:29:13.426 "adrfam": "ipv4", 00:29:13.426 "trsvcid": "4420", 00:29:13.426 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:29:13.426 } 00:29:13.426 } 00:29:13.426 Got JSON-RPC error response 00:29:13.426 GoRPCClient: error on JSON-RPC call 00:29:13.426 21:32:02 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:13.426 21:32:02 -- 
common/autotest_common.sh@641 -- # es=1 00:29:13.426 21:32:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:13.426 21:32:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:13.426 21:32:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:13.426 21:32:02 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:13.426 21:32:02 -- common/autotest_common.sh@638 -- # local es=0 00:29:13.426 21:32:02 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:13.426 21:32:02 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:13.426 21:32:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.426 21:32:02 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:13.426 21:32:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.426 21:32:02 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:13.426 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.426 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.426 2024/04/26 21:32:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:29:13.426 request: 00:29:13.426 { 00:29:13.426 "method": "bdev_nvme_attach_controller", 00:29:13.426 "params": { 00:29:13.426 "name": "NVMe0", 00:29:13.426 "trtype": "tcp", 00:29:13.426 "traddr": "10.0.0.2", 00:29:13.426 "hostaddr": "10.0.0.2", 00:29:13.426 "hostsvcid": "60000", 00:29:13.426 "adrfam": "ipv4", 00:29:13.426 "trsvcid": "4420", 00:29:13.426 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:29:13.426 } 00:29:13.426 } 00:29:13.426 Got JSON-RPC error response 00:29:13.426 GoRPCClient: error on JSON-RPC call 00:29:13.426 21:32:02 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:13.426 21:32:02 -- common/autotest_common.sh@641 -- # es=1 00:29:13.426 21:32:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:13.426 21:32:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:13.426 21:32:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:13.426 21:32:02 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:13.427 21:32:02 -- common/autotest_common.sh@638 -- # local es=0 00:29:13.427 21:32:02 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:13.427 21:32:02 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:13.427 21:32:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.427 21:32:02 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:13.427 21:32:02 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.427 21:32:02 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:13.427 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.427 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.427 2024/04/26 21:32:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:29:13.427 request: 00:29:13.427 { 00:29:13.427 "method": "bdev_nvme_attach_controller", 00:29:13.427 "params": { 00:29:13.427 "name": "NVMe0", 00:29:13.427 "trtype": "tcp", 00:29:13.427 "traddr": "10.0.0.2", 00:29:13.427 "hostaddr": "10.0.0.2", 00:29:13.427 "hostsvcid": "60000", 00:29:13.427 "adrfam": "ipv4", 00:29:13.427 "trsvcid": "4420", 00:29:13.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:13.427 "multipath": "disable" 00:29:13.427 } 00:29:13.427 } 00:29:13.427 Got JSON-RPC error response 00:29:13.427 GoRPCClient: error on JSON-RPC call 00:29:13.427 21:32:02 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:13.427 21:32:02 -- common/autotest_common.sh@641 -- # es=1 00:29:13.427 21:32:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:13.427 21:32:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:13.427 21:32:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:13.427 21:32:02 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:13.427 21:32:02 -- common/autotest_common.sh@638 -- # local es=0 00:29:13.427 21:32:02 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:13.427 21:32:02 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:13.427 21:32:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.427 21:32:02 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:13.427 21:32:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:13.427 21:32:02 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:13.427 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.427 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.427 2024/04/26 21:32:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:29:13.427 request: 00:29:13.427 { 00:29:13.427 "method": "bdev_nvme_attach_controller", 00:29:13.427 "params": { 00:29:13.427 "name": "NVMe0", 
00:29:13.427 "trtype": "tcp", 00:29:13.427 "traddr": "10.0.0.2", 00:29:13.427 "hostaddr": "10.0.0.2", 00:29:13.427 "hostsvcid": "60000", 00:29:13.427 "adrfam": "ipv4", 00:29:13.427 "trsvcid": "4420", 00:29:13.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:13.427 "multipath": "failover" 00:29:13.427 } 00:29:13.427 } 00:29:13.427 Got JSON-RPC error response 00:29:13.427 GoRPCClient: error on JSON-RPC call 00:29:13.427 21:32:02 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:13.427 21:32:02 -- common/autotest_common.sh@641 -- # es=1 00:29:13.427 21:32:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:13.427 21:32:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:13.427 21:32:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:13.427 21:32:02 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:13.427 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.427 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.427 00:29:13.427 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.427 21:32:02 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:13.427 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.427 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.427 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.427 21:32:02 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:13.427 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.427 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.687 00:29:13.687 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.687 21:32:02 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:13.687 21:32:02 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:13.687 21:32:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:13.687 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:29:13.687 21:32:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:13.688 21:32:02 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:13.688 21:32:02 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:14.624 0 00:29:14.910 21:32:03 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:14.910 21:32:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.910 21:32:03 -- common/autotest_common.sh@10 -- # set +x 00:29:14.910 21:32:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.910 21:32:03 -- host/multicontroller.sh@100 -- # killprocess 97978 00:29:14.910 21:32:03 -- common/autotest_common.sh@936 -- # '[' -z 97978 ']' 00:29:14.910 21:32:03 -- common/autotest_common.sh@940 -- # kill -0 97978 00:29:14.910 21:32:03 -- common/autotest_common.sh@941 -- # uname 00:29:14.910 21:32:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:14.910 21:32:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97978 00:29:14.910 21:32:03 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:29:14.910 21:32:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:14.910 killing process with pid 97978 00:29:14.910 21:32:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97978' 00:29:14.910 21:32:03 -- common/autotest_common.sh@955 -- # kill 97978 00:29:14.910 21:32:03 -- common/autotest_common.sh@960 -- # wait 97978 00:29:14.910 21:32:04 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:14.910 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.910 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:29:14.910 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.910 21:32:04 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:14.910 21:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.910 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:29:14.910 21:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.910 21:32:04 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:29:14.910 21:32:04 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:14.910 21:32:04 -- common/autotest_common.sh@1598 -- # read -r file 00:29:14.910 21:32:04 -- common/autotest_common.sh@1597 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:29:14.910 21:32:04 -- common/autotest_common.sh@1597 -- # sort -u 00:29:14.910 21:32:04 -- common/autotest_common.sh@1599 -- # cat 00:29:14.910 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:29:14.910 [2024-04-26 21:32:01.479634] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:29:14.910 [2024-04-26 21:32:01.479727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97978 ] 00:29:14.910 [2024-04-26 21:32:01.618846] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.910 [2024-04-26 21:32:01.671732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.910 [2024-04-26 21:32:02.715366] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name 68a40938-c652-4d14-a764-22384f6a1a70 already exists 00:29:14.910 [2024-04-26 21:32:02.715437] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:68a40938-c652-4d14-a764-22384f6a1a70 alias for bdev NVMe1n1 00:29:14.910 [2024-04-26 21:32:02.715451] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:14.910 Running I/O for 1 seconds... 
00:29:14.910
00:29:14.910 Latency(us)
00:29:14.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:14.910 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:29:14.910 NVMe0n1 : 1.00 20508.11 80.11 0.00 0.00 6232.23 3262.49 10932.21
00:29:14.910 ===================================================================================================================
00:29:14.910 Total : 20508.11 80.11 0.00 0.00 6232.23 3262.49 10932.21
00:29:14.910 Received shutdown signal, test time was about 1.000000 seconds
00:29:14.910
00:29:14.910 Latency(us)
00:29:14.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:14.910 ===================================================================================================================
00:29:14.910 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:14.910 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt ---
00:29:14.910 21:32:04 -- common/autotest_common.sh@1604 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:29:14.910 21:32:04 -- common/autotest_common.sh@1598 -- # read -r file
00:29:14.910 21:32:04 -- host/multicontroller.sh@108 -- # nvmftestfini
00:29:14.910 21:32:04 -- nvmf/common.sh@477 -- # nvmfcleanup
00:29:14.910 21:32:04 -- nvmf/common.sh@117 -- # sync
00:29:15.170 21:32:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:15.170 21:32:04 -- nvmf/common.sh@120 -- # set +e
00:29:15.170 21:32:04 -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:15.170 21:32:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:15.170 rmmod nvme_tcp
00:29:15.170 rmmod nvme_fabrics
00:29:15.170 rmmod nvme_keyring
00:29:15.170 21:32:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:15.170 21:32:04 -- nvmf/common.sh@124 -- # set -e
00:29:15.170 21:32:04 -- nvmf/common.sh@125 -- # return 0
00:29:15.170 21:32:04 -- nvmf/common.sh@478 -- # '[' -n 97926 ']'
00:29:15.170 21:32:04 -- nvmf/common.sh@479 -- # killprocess 97926
00:29:15.170 21:32:04 -- common/autotest_common.sh@936 -- # '[' -z 97926 ']'
00:29:15.170 21:32:04 -- common/autotest_common.sh@940 -- # kill -0 97926
00:29:15.170 21:32:04 -- common/autotest_common.sh@941 -- # uname
00:29:15.170 21:32:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:15.170 21:32:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97926
00:29:15.170 21:32:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:29:15.170 killing process with pid 97926
00:29:15.170 21:32:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:29:15.170 21:32:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97926'
00:29:15.170 21:32:04 -- common/autotest_common.sh@955 -- # kill 97926
00:29:15.170 21:32:04 -- common/autotest_common.sh@960 -- # wait 97926
00:29:15.429 21:32:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:29:15.429 21:32:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:29:15.429 21:32:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:29:15.429 21:32:04 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:15.429 21:32:04 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:29:15.429 21:32:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:15.429 21:32:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:15.429 21:32:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:15.429 21:32:04 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:29:15.429
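The multicontroller trace above drives bdevperf over JSON-RPC: re-attaching the existing NVMe0 controller with -x disable or -x failover is rejected with Code=-114, a plain attach on the second listener (port 4421) and a second named controller (NVMe1) succeed, and perform_tests then runs the 1-second write workload summarized in the latency table above. A condensed sketch of the same RPC sequence follows, assuming the rpc_cmd helper seen in the trace wraps SPDK's scripts/rpc.py (that mapping is an assumption; flags, addresses, and names are taken from the log):

    # Sketch only: replay of the multipath checks from host/multicontroller.sh above.
    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"     # bdevperf's RPC socket, as in the trace
    NQN=nqn.2016-06.io.spdk:cnode1

    # NVMe0 is already attached on port 4420, so both calls below are expected to fail (Code=-114):
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n "$NQN" -i 10.0.0.2 -c 60000 -x disable  || echo "rejected as expected"
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n "$NQN" -i 10.0.0.2 -c 60000 -x failover || echo "rejected as expected"

    # A plain attach on the second listener (4421) is accepted, then detached again,
    # and a second named controller is attached to the same subsystem:
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"
    $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"
    $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n "$NQN" -i 10.0.0.2 -c 60000
    $RPC bdev_nvme_get_controllers | grep -c NVMe      # the test expects 2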
00:29:15.429 real 0m4.868s 00:29:15.429 user 0m15.249s 00:29:15.429 sys 0m1.037s 00:29:15.429 21:32:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:15.429 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.429 ************************************ 00:29:15.429 END TEST nvmf_multicontroller 00:29:15.429 ************************************ 00:29:15.429 21:32:04 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:15.429 21:32:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:15.429 21:32:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:15.429 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:29:15.688 ************************************ 00:29:15.688 START TEST nvmf_aer 00:29:15.688 ************************************ 00:29:15.688 21:32:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:15.688 * Looking for test storage... 00:29:15.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:15.688 21:32:04 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:15.688 21:32:04 -- nvmf/common.sh@7 -- # uname -s 00:29:15.688 21:32:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.688 21:32:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.688 21:32:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.688 21:32:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.688 21:32:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.688 21:32:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.688 21:32:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.688 21:32:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.688 21:32:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.688 21:32:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.688 21:32:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:29:15.688 21:32:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:29:15.688 21:32:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.688 21:32:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.688 21:32:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:15.689 21:32:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.689 21:32:04 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:15.689 21:32:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.689 21:32:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.689 21:32:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.689 21:32:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.689 21:32:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.689 21:32:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.689 21:32:04 -- paths/export.sh@5 -- # export PATH 00:29:15.689 21:32:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.689 21:32:04 -- nvmf/common.sh@47 -- # : 0 00:29:15.689 21:32:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:15.689 21:32:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:15.689 21:32:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.689 21:32:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.689 21:32:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.689 21:32:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:15.689 21:32:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:15.689 21:32:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:15.689 21:32:04 -- host/aer.sh@11 -- # nvmftestinit 00:29:15.689 21:32:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:15.689 21:32:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.689 21:32:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:15.689 21:32:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:15.689 21:32:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:15.689 21:32:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.689 21:32:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:15.689 21:32:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.689 21:32:04 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:15.689 21:32:04 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:15.689 21:32:04 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:15.689 21:32:04 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:15.689 21:32:04 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:15.689 21:32:04 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:15.689 21:32:04 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:15.689 21:32:04 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:15.689 21:32:04 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:15.689 21:32:04 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:15.689 21:32:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:15.689 21:32:04 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:15.689 21:32:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:15.689 21:32:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:15.689 21:32:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:15.689 21:32:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:15.689 21:32:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:15.689 21:32:04 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:15.689 21:32:04 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:15.689 21:32:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:15.949 Cannot find device "nvmf_tgt_br" 00:29:15.949 21:32:04 -- nvmf/common.sh@155 -- # true 00:29:15.949 21:32:04 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:15.949 Cannot find device "nvmf_tgt_br2" 00:29:15.949 21:32:04 -- nvmf/common.sh@156 -- # true 00:29:15.949 21:32:04 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:15.949 21:32:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:15.949 Cannot find device "nvmf_tgt_br" 00:29:15.949 21:32:04 -- nvmf/common.sh@158 -- # true 00:29:15.949 21:32:04 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:15.949 Cannot find device "nvmf_tgt_br2" 00:29:15.949 21:32:04 -- nvmf/common.sh@159 -- # true 00:29:15.949 21:32:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:15.949 21:32:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:15.949 21:32:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:15.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:15.949 21:32:05 -- nvmf/common.sh@162 -- # true 00:29:15.949 21:32:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:15.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:15.949 21:32:05 -- nvmf/common.sh@163 -- # true 00:29:15.949 21:32:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:15.949 21:32:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:15.949 21:32:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:15.949 21:32:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:15.949 21:32:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:15.949 21:32:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:15.949 21:32:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:15.949 21:32:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:15.949 21:32:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:15.949 21:32:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:15.949 21:32:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:15.949 21:32:05 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:15.949 21:32:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:15.949 21:32:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:15.949 21:32:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:15.949 21:32:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:15.949 21:32:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:15.949 21:32:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:15.949 21:32:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:15.949 21:32:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:16.208 21:32:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:16.208 21:32:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:16.208 21:32:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:16.208 21:32:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:16.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:16.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:29:16.208 00:29:16.208 --- 10.0.0.2 ping statistics --- 00:29:16.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.208 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:29:16.208 21:32:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:16.208 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:16.208 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:29:16.208 00:29:16.208 --- 10.0.0.3 ping statistics --- 00:29:16.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.208 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:29:16.208 21:32:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:16.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:16.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:29:16.208 00:29:16.208 --- 10.0.0.1 ping statistics --- 00:29:16.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.208 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:29:16.208 21:32:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:16.208 21:32:05 -- nvmf/common.sh@422 -- # return 0 00:29:16.208 21:32:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:16.208 21:32:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.208 21:32:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:16.208 21:32:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:16.208 21:32:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.208 21:32:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:16.208 21:32:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:16.208 21:32:05 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:16.208 21:32:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:16.208 21:32:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:16.208 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.208 21:32:05 -- nvmf/common.sh@470 -- # nvmfpid=98230 00:29:16.208 21:32:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:16.208 21:32:05 -- nvmf/common.sh@471 -- # waitforlisten 98230 00:29:16.208 21:32:05 -- common/autotest_common.sh@817 -- # '[' -z 98230 ']' 00:29:16.208 21:32:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.208 21:32:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:16.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.208 21:32:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.208 21:32:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:16.208 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:29:16.208 [2024-04-26 21:32:05.354047] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:29:16.208 [2024-04-26 21:32:05.354111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.466 [2024-04-26 21:32:05.492903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:16.466 [2024-04-26 21:32:05.544281] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.466 [2024-04-26 21:32:05.544348] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.466 [2024-04-26 21:32:05.544356] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.466 [2024-04-26 21:32:05.544361] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.466 [2024-04-26 21:32:05.544366] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
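Before the aer test proper, nvmftestinit builds the veth/network-namespace topology (nvmf_veth_init) traced above and then launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace. A condensed sketch of that layout, using only interface names, addresses, and commands that appear in the trace (the second target interface/bridge pair, nvmf_tgt_if2/nvmf_tgt_br2, is omitted for brevity and the binary path is shortened):

    # Condensed sketch of nvmf_veth_init as traced above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                             # initiator -> target check

    # The target then runs entirely inside the namespace:
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF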
00:29:16.466 [2024-04-26 21:32:05.544544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.466 [2024-04-26 21:32:05.544731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.466 [2024-04-26 21:32:05.544898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.466 [2024-04-26 21:32:05.544901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.035 21:32:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:17.035 21:32:06 -- common/autotest_common.sh@850 -- # return 0 00:29:17.035 21:32:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:17.035 21:32:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:17.035 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:17.035 21:32:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.035 21:32:06 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:17.035 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.035 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:17.035 [2024-04-26 21:32:06.277894] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.294 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.294 21:32:06 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:17.294 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.294 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:17.294 Malloc0 00:29:17.294 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.294 21:32:06 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:17.294 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.294 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:17.294 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.294 21:32:06 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:17.294 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.294 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:17.294 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.294 21:32:06 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:17.294 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.294 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:17.294 [2024-04-26 21:32:06.339925] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.294 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.294 21:32:06 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:17.294 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.294 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:17.294 [2024-04-26 21:32:06.351717] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:17.294 [ 00:29:17.294 { 00:29:17.294 "allow_any_host": true, 00:29:17.294 "hosts": [], 00:29:17.294 "listen_addresses": [], 00:29:17.294 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:17.294 "subtype": "Discovery" 00:29:17.294 }, 00:29:17.294 { 00:29:17.294 "allow_any_host": true, 00:29:17.294 "hosts": 
[], 00:29:17.294 "listen_addresses": [ 00:29:17.294 { 00:29:17.294 "adrfam": "IPv4", 00:29:17.294 "traddr": "10.0.0.2", 00:29:17.294 "transport": "TCP", 00:29:17.294 "trsvcid": "4420", 00:29:17.294 "trtype": "TCP" 00:29:17.294 } 00:29:17.294 ], 00:29:17.294 "max_cntlid": 65519, 00:29:17.294 "max_namespaces": 2, 00:29:17.294 "min_cntlid": 1, 00:29:17.294 "model_number": "SPDK bdev Controller", 00:29:17.294 "namespaces": [ 00:29:17.294 { 00:29:17.294 "bdev_name": "Malloc0", 00:29:17.294 "name": "Malloc0", 00:29:17.294 "nguid": "CAEB212D356E4C98931C8E3F8DF75859", 00:29:17.294 "nsid": 1, 00:29:17.294 "uuid": "caeb212d-356e-4c98-931c-8e3f8df75859" 00:29:17.294 } 00:29:17.294 ], 00:29:17.294 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:17.294 "serial_number": "SPDK00000000000001", 00:29:17.294 "subtype": "NVMe" 00:29:17.294 } 00:29:17.294 ] 00:29:17.294 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.294 21:32:06 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:17.294 21:32:06 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:17.294 21:32:06 -- host/aer.sh@33 -- # aerpid=98290 00:29:17.294 21:32:06 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:17.294 21:32:06 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:17.294 21:32:06 -- common/autotest_common.sh@1251 -- # local i=0 00:29:17.294 21:32:06 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:17.294 21:32:06 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:29:17.294 21:32:06 -- common/autotest_common.sh@1254 -- # i=1 00:29:17.294 21:32:06 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:29:17.295 21:32:06 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:17.295 21:32:06 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:29:17.295 21:32:06 -- common/autotest_common.sh@1254 -- # i=2 00:29:17.295 21:32:06 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:29:17.553 21:32:06 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:17.553 21:32:06 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:17.553 21:32:06 -- common/autotest_common.sh@1262 -- # return 0 00:29:17.553 21:32:06 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:17.553 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.553 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:17.553 Malloc1 00:29:17.553 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.553 21:32:06 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:17.553 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.553 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:17.553 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.553 21:32:06 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:17.553 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.553 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:17.553 Asynchronous Event Request test 00:29:17.553 Attaching to 10.0.0.2 00:29:17.553 Attached to 10.0.0.2 00:29:17.553 Registering asynchronous event callbacks... 00:29:17.553 Starting namespace attribute notice tests for all controllers... 
00:29:17.553 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:17.553 aer_cb - Changed Namespace 00:29:17.553 Cleaning up... 00:29:17.553 [ 00:29:17.553 { 00:29:17.553 "allow_any_host": true, 00:29:17.553 "hosts": [], 00:29:17.553 "listen_addresses": [], 00:29:17.553 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:17.553 "subtype": "Discovery" 00:29:17.553 }, 00:29:17.553 { 00:29:17.553 "allow_any_host": true, 00:29:17.553 "hosts": [], 00:29:17.553 "listen_addresses": [ 00:29:17.553 { 00:29:17.553 "adrfam": "IPv4", 00:29:17.553 "traddr": "10.0.0.2", 00:29:17.553 "transport": "TCP", 00:29:17.553 "trsvcid": "4420", 00:29:17.553 "trtype": "TCP" 00:29:17.553 } 00:29:17.553 ], 00:29:17.553 "max_cntlid": 65519, 00:29:17.553 "max_namespaces": 2, 00:29:17.553 "min_cntlid": 1, 00:29:17.553 "model_number": "SPDK bdev Controller", 00:29:17.553 "namespaces": [ 00:29:17.553 { 00:29:17.553 "bdev_name": "Malloc0", 00:29:17.553 "name": "Malloc0", 00:29:17.553 "nguid": "CAEB212D356E4C98931C8E3F8DF75859", 00:29:17.553 "nsid": 1, 00:29:17.553 "uuid": "caeb212d-356e-4c98-931c-8e3f8df75859" 00:29:17.553 }, 00:29:17.553 { 00:29:17.553 "bdev_name": "Malloc1", 00:29:17.553 "name": "Malloc1", 00:29:17.553 "nguid": "DD54C3869AB54DB0B7E80E78CEB7D69F", 00:29:17.553 "nsid": 2, 00:29:17.553 "uuid": "dd54c386-9ab5-4db0-b7e8-0e78ceb7d69f" 00:29:17.553 } 00:29:17.553 ], 00:29:17.553 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:17.553 "serial_number": "SPDK00000000000001", 00:29:17.553 "subtype": "NVMe" 00:29:17.553 } 00:29:17.553 ] 00:29:17.553 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.553 21:32:06 -- host/aer.sh@43 -- # wait 98290 00:29:17.553 21:32:06 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:17.553 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.553 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:17.554 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.554 21:32:06 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:17.554 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.554 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:17.554 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.554 21:32:06 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:17.554 21:32:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:17.554 21:32:06 -- common/autotest_common.sh@10 -- # set +x 00:29:17.554 21:32:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:17.554 21:32:06 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:17.554 21:32:06 -- host/aer.sh@51 -- # nvmftestfini 00:29:17.554 21:32:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:17.554 21:32:06 -- nvmf/common.sh@117 -- # sync 00:29:17.554 21:32:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:17.554 21:32:06 -- nvmf/common.sh@120 -- # set +e 00:29:17.554 21:32:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:17.554 21:32:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:17.554 rmmod nvme_tcp 00:29:17.554 rmmod nvme_fabrics 00:29:17.813 rmmod nvme_keyring 00:29:17.813 21:32:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:17.813 21:32:06 -- nvmf/common.sh@124 -- # set -e 00:29:17.813 21:32:06 -- nvmf/common.sh@125 -- # return 0 00:29:17.813 21:32:06 -- nvmf/common.sh@478 -- # '[' -n 98230 ']' 00:29:17.813 21:32:06 -- nvmf/common.sh@479 -- # killprocess 98230 00:29:17.813 21:32:06 -- 
common/autotest_common.sh@936 -- # '[' -z 98230 ']' 00:29:17.813 21:32:06 -- common/autotest_common.sh@940 -- # kill -0 98230 00:29:17.813 21:32:06 -- common/autotest_common.sh@941 -- # uname 00:29:17.813 21:32:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:17.813 21:32:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98230 00:29:17.813 21:32:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:17.813 21:32:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:17.813 killing process with pid 98230 00:29:17.813 21:32:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98230' 00:29:17.813 21:32:06 -- common/autotest_common.sh@955 -- # kill 98230 00:29:17.813 [2024-04-26 21:32:06.881598] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:17.813 21:32:06 -- common/autotest_common.sh@960 -- # wait 98230 00:29:18.074 21:32:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:18.074 21:32:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:18.074 21:32:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:18.074 21:32:07 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:18.074 21:32:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:18.074 21:32:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.074 21:32:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:18.074 21:32:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.074 21:32:07 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:18.074 00:29:18.074 real 0m2.429s 00:29:18.074 user 0m6.299s 00:29:18.074 sys 0m0.753s 00:29:18.074 21:32:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:18.074 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.074 ************************************ 00:29:18.074 END TEST nvmf_aer 00:29:18.074 ************************************ 00:29:18.074 21:32:07 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:18.074 21:32:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:18.074 21:32:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:18.074 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.074 ************************************ 00:29:18.074 START TEST nvmf_async_init 00:29:18.074 ************************************ 00:29:18.074 21:32:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:18.334 * Looking for test storage... 
00:29:18.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:18.334 21:32:07 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:18.334 21:32:07 -- nvmf/common.sh@7 -- # uname -s 00:29:18.334 21:32:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.334 21:32:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.334 21:32:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.334 21:32:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.334 21:32:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.334 21:32:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.334 21:32:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.334 21:32:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.334 21:32:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.334 21:32:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.334 21:32:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:29:18.334 21:32:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:29:18.334 21:32:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.334 21:32:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.334 21:32:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:18.334 21:32:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.334 21:32:07 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:18.334 21:32:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.334 21:32:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.334 21:32:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.335 21:32:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.335 21:32:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.335 21:32:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.335 21:32:07 -- paths/export.sh@5 -- # export PATH 00:29:18.335 21:32:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.335 21:32:07 -- nvmf/common.sh@47 -- # : 0 00:29:18.335 21:32:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:18.335 21:32:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:18.335 21:32:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.335 21:32:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.335 21:32:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.335 21:32:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:18.335 21:32:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:18.335 21:32:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:18.335 21:32:07 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:18.335 21:32:07 -- host/async_init.sh@14 -- # null_block_size=512 00:29:18.335 21:32:07 -- host/async_init.sh@15 -- # null_bdev=null0 00:29:18.335 21:32:07 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:18.335 21:32:07 -- host/async_init.sh@20 -- # uuidgen 00:29:18.335 21:32:07 -- host/async_init.sh@20 -- # tr -d - 00:29:18.335 21:32:07 -- host/async_init.sh@20 -- # nguid=3144081c6f564d049e8d4a676d64f994 00:29:18.335 21:32:07 -- host/async_init.sh@22 -- # nvmftestinit 00:29:18.335 21:32:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:18.335 21:32:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.335 21:32:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:18.335 21:32:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:18.335 21:32:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:18.335 21:32:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.335 21:32:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:18.335 21:32:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.335 21:32:07 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:18.335 21:32:07 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:18.335 21:32:07 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:18.335 21:32:07 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:18.335 21:32:07 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:18.335 21:32:07 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:18.335 21:32:07 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.335 21:32:07 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.335 21:32:07 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:18.335 21:32:07 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:18.335 21:32:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:18.335 21:32:07 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:18.335 21:32:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:18.335 21:32:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.335 21:32:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:18.335 21:32:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:18.335 21:32:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:18.335 21:32:07 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:18.335 21:32:07 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:18.335 21:32:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:18.335 Cannot find device "nvmf_tgt_br" 00:29:18.335 21:32:07 -- nvmf/common.sh@155 -- # true 00:29:18.335 21:32:07 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:18.335 Cannot find device "nvmf_tgt_br2" 00:29:18.335 21:32:07 -- nvmf/common.sh@156 -- # true 00:29:18.335 21:32:07 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:18.335 21:32:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:18.335 Cannot find device "nvmf_tgt_br" 00:29:18.335 21:32:07 -- nvmf/common.sh@158 -- # true 00:29:18.335 21:32:07 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:18.335 Cannot find device "nvmf_tgt_br2" 00:29:18.335 21:32:07 -- nvmf/common.sh@159 -- # true 00:29:18.335 21:32:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:18.595 21:32:07 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:18.595 21:32:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:18.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:18.595 21:32:07 -- nvmf/common.sh@162 -- # true 00:29:18.595 21:32:07 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:18.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:18.595 21:32:07 -- nvmf/common.sh@163 -- # true 00:29:18.595 21:32:07 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:18.595 21:32:07 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:18.595 21:32:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:18.595 21:32:07 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:18.595 21:32:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:18.595 21:32:07 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:18.595 21:32:07 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:18.595 21:32:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:18.595 21:32:07 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:18.595 21:32:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:18.595 21:32:07 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:18.595 21:32:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:18.595 21:32:07 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:18.595 21:32:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:18.595 21:32:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:18.595 21:32:07 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:18.595 21:32:07 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:18.595 21:32:07 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:18.595 21:32:07 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:18.595 21:32:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:18.595 21:32:07 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:18.595 21:32:07 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:18.595 21:32:07 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:18.595 21:32:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:18.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:29:18.855 00:29:18.855 --- 10.0.0.2 ping statistics --- 00:29:18.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.855 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:29:18.855 21:32:07 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:18.855 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:18.855 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:29:18.855 00:29:18.855 --- 10.0.0.3 ping statistics --- 00:29:18.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.855 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:29:18.855 21:32:07 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:18.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:29:18.855 00:29:18.855 --- 10.0.0.1 ping statistics --- 00:29:18.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.855 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:29:18.855 21:32:07 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.855 21:32:07 -- nvmf/common.sh@422 -- # return 0 00:29:18.855 21:32:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:18.855 21:32:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.855 21:32:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:18.855 21:32:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:18.855 21:32:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.855 21:32:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:18.855 21:32:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:18.855 21:32:07 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:18.855 21:32:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:18.855 21:32:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:18.855 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.855 21:32:07 -- nvmf/common.sh@470 -- # nvmfpid=98466 00:29:18.855 21:32:07 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:18.855 21:32:07 -- nvmf/common.sh@471 -- # waitforlisten 98466 00:29:18.855 21:32:07 -- common/autotest_common.sh@817 -- # '[' -z 98466 ']' 00:29:18.855 21:32:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.855 21:32:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:18.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.855 21:32:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.855 21:32:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:18.855 21:32:07 -- common/autotest_common.sh@10 -- # set +x 00:29:18.855 [2024-04-26 21:32:07.957185] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:29:18.855 [2024-04-26 21:32:07.957249] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.855 [2024-04-26 21:32:08.094860] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.114 [2024-04-26 21:32:08.143751] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.114 [2024-04-26 21:32:08.143793] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.114 [2024-04-26 21:32:08.143800] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.114 [2024-04-26 21:32:08.143805] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.114 [2024-04-26 21:32:08.143809] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
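The nvmf_async_init preamble above fixes the null-bdev geometry and derives the namespace NGUID by stripping the dashes from a freshly generated UUID, then starts a single-core target (-m 0x1). A minimal sketch of those values, copied from the trace; the bdev_null_create and nvmf_subsystem_add_ns calls they feed are the ones issued further down:

    # Values fixed at the top of host/async_init.sh in the trace above:
    null_bdev_size=1024                 # blocks
    null_block_size=512                 # bytes per block
    null_bdev=null0
    nvme_bdev=nvme0
    nguid=$(uuidgen | tr -d -)          # 3144081c6f564d049e8d4a676d64f994 in this run

    # They back the namespace created below (RPC names as they appear in the trace):
    #   bdev_null_create null0 1024 512
    #   nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"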
00:29:19.114 [2024-04-26 21:32:08.143829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.683 21:32:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:19.683 21:32:08 -- common/autotest_common.sh@850 -- # return 0 00:29:19.683 21:32:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:19.683 21:32:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:19.683 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:29:19.683 21:32:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.683 21:32:08 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:19.683 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.683 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:29:19.683 [2024-04-26 21:32:08.867105] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.683 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.683 21:32:08 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:19.683 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.683 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:29:19.683 null0 00:29:19.683 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.683 21:32:08 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:19.683 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.683 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:29:19.683 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.683 21:32:08 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:19.683 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.683 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:29:19.683 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.683 21:32:08 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3144081c6f564d049e8d4a676d64f994 00:29:19.683 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.683 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:29:19.683 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.683 21:32:08 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:19.683 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.683 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:29:19.683 [2024-04-26 21:32:08.919107] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.683 21:32:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.683 21:32:08 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:19.683 21:32:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.683 21:32:08 -- common/autotest_common.sh@10 -- # set +x 00:29:19.952 nvme0n1 00:29:19.952 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.952 21:32:09 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:19.952 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.952 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:29:19.952 [ 00:29:19.952 { 00:29:19.952 "aliases": [ 00:29:19.952 "3144081c-6f56-4d04-9e8d-4a676d64f994" 
00:29:19.952 ], 00:29:19.952 "assigned_rate_limits": { 00:29:19.952 "r_mbytes_per_sec": 0, 00:29:19.952 "rw_ios_per_sec": 0, 00:29:19.952 "rw_mbytes_per_sec": 0, 00:29:19.953 "w_mbytes_per_sec": 0 00:29:19.953 }, 00:29:19.953 "block_size": 512, 00:29:19.953 "claimed": false, 00:29:19.953 "driver_specific": { 00:29:19.953 "mp_policy": "active_passive", 00:29:19.953 "nvme": [ 00:29:19.953 { 00:29:19.953 "ctrlr_data": { 00:29:19.953 "ana_reporting": false, 00:29:19.953 "cntlid": 1, 00:29:19.953 "firmware_revision": "24.05", 00:29:19.953 "model_number": "SPDK bdev Controller", 00:29:19.953 "multi_ctrlr": true, 00:29:19.953 "oacs": { 00:29:19.953 "firmware": 0, 00:29:19.953 "format": 0, 00:29:19.953 "ns_manage": 0, 00:29:19.953 "security": 0 00:29:19.953 }, 00:29:19.953 "serial_number": "00000000000000000000", 00:29:19.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.953 "vendor_id": "0x8086" 00:29:19.953 }, 00:29:19.953 "ns_data": { 00:29:19.953 "can_share": true, 00:29:19.953 "id": 1 00:29:19.953 }, 00:29:19.953 "trid": { 00:29:19.953 "adrfam": "IPv4", 00:29:19.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:19.953 "traddr": "10.0.0.2", 00:29:19.953 "trsvcid": "4420", 00:29:19.953 "trtype": "TCP" 00:29:19.953 }, 00:29:19.953 "vs": { 00:29:19.953 "nvme_version": "1.3" 00:29:19.953 } 00:29:19.953 } 00:29:19.953 ] 00:29:19.953 }, 00:29:19.953 "memory_domains": [ 00:29:19.953 { 00:29:19.953 "dma_device_id": "system", 00:29:19.953 "dma_device_type": 1 00:29:19.953 } 00:29:19.953 ], 00:29:19.953 "name": "nvme0n1", 00:29:19.953 "num_blocks": 2097152, 00:29:19.953 "product_name": "NVMe disk", 00:29:19.953 "supported_io_types": { 00:29:19.953 "abort": true, 00:29:19.953 "compare": true, 00:29:19.953 "compare_and_write": true, 00:29:19.953 "flush": true, 00:29:19.953 "nvme_admin": true, 00:29:19.953 "nvme_io": true, 00:29:19.953 "read": true, 00:29:19.953 "reset": true, 00:29:19.953 "unmap": false, 00:29:19.953 "write": true, 00:29:19.953 "write_zeroes": true 00:29:19.953 }, 00:29:19.953 "uuid": "3144081c-6f56-4d04-9e8d-4a676d64f994", 00:29:19.953 "zoned": false 00:29:19.953 } 00:29:19.953 ] 00:29:19.953 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.953 21:32:09 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:19.953 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.953 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:29:19.953 [2024-04-26 21:32:09.182545] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:19.953 [2024-04-26 21:32:09.182637] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20706c0 (9): Bad file descriptor 00:29:20.239 [2024-04-26 21:32:09.314464] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
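After the initial attach, the test resets the controller in place: bdev_nvme_reset_controller drops the old admin qpair (note the Bad file descriptor notice above) and reconnects, and the bdev is re-read to confirm the new association, with ctrlr_data.cntlid moving from 1 to 2 in the output that follows. A minimal sketch of that check, again assuming the rpc_cmd helper maps to scripts/rpc.py on the default /var/tmp/spdk.sock socket:

    # Sketch: reset the attached controller and confirm a new controller ID afterwards.
    RPC="scripts/rpc.py"                              # assumption: rpc_cmd wraps this script
    $RPC bdev_nvme_reset_controller nvme0
    $RPC bdev_get_bdevs -b nvme0n1 | grep '"cntlid"'  # 1 before the reset, 2 after it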
00:29:20.239 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.239 21:32:09 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:20.239 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.239 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:29:20.239 [ 00:29:20.239 { 00:29:20.239 "aliases": [ 00:29:20.239 "3144081c-6f56-4d04-9e8d-4a676d64f994" 00:29:20.239 ], 00:29:20.239 "assigned_rate_limits": { 00:29:20.239 "r_mbytes_per_sec": 0, 00:29:20.239 "rw_ios_per_sec": 0, 00:29:20.239 "rw_mbytes_per_sec": 0, 00:29:20.239 "w_mbytes_per_sec": 0 00:29:20.239 }, 00:29:20.239 "block_size": 512, 00:29:20.239 "claimed": false, 00:29:20.239 "driver_specific": { 00:29:20.239 "mp_policy": "active_passive", 00:29:20.239 "nvme": [ 00:29:20.239 { 00:29:20.239 "ctrlr_data": { 00:29:20.239 "ana_reporting": false, 00:29:20.239 "cntlid": 2, 00:29:20.239 "firmware_revision": "24.05", 00:29:20.239 "model_number": "SPDK bdev Controller", 00:29:20.239 "multi_ctrlr": true, 00:29:20.239 "oacs": { 00:29:20.239 "firmware": 0, 00:29:20.239 "format": 0, 00:29:20.239 "ns_manage": 0, 00:29:20.239 "security": 0 00:29:20.239 }, 00:29:20.239 "serial_number": "00000000000000000000", 00:29:20.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:20.239 "vendor_id": "0x8086" 00:29:20.239 }, 00:29:20.239 "ns_data": { 00:29:20.239 "can_share": true, 00:29:20.239 "id": 1 00:29:20.239 }, 00:29:20.239 "trid": { 00:29:20.239 "adrfam": "IPv4", 00:29:20.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:20.239 "traddr": "10.0.0.2", 00:29:20.239 "trsvcid": "4420", 00:29:20.239 "trtype": "TCP" 00:29:20.239 }, 00:29:20.239 "vs": { 00:29:20.239 "nvme_version": "1.3" 00:29:20.239 } 00:29:20.239 } 00:29:20.239 ] 00:29:20.239 }, 00:29:20.239 "memory_domains": [ 00:29:20.239 { 00:29:20.239 "dma_device_id": "system", 00:29:20.239 "dma_device_type": 1 00:29:20.239 } 00:29:20.239 ], 00:29:20.239 "name": "nvme0n1", 00:29:20.239 "num_blocks": 2097152, 00:29:20.239 "product_name": "NVMe disk", 00:29:20.239 "supported_io_types": { 00:29:20.239 "abort": true, 00:29:20.239 "compare": true, 00:29:20.239 "compare_and_write": true, 00:29:20.239 "flush": true, 00:29:20.239 "nvme_admin": true, 00:29:20.239 "nvme_io": true, 00:29:20.239 "read": true, 00:29:20.239 "reset": true, 00:29:20.239 "unmap": false, 00:29:20.239 "write": true, 00:29:20.239 "write_zeroes": true 00:29:20.239 }, 00:29:20.239 "uuid": "3144081c-6f56-4d04-9e8d-4a676d64f994", 00:29:20.239 "zoned": false 00:29:20.239 } 00:29:20.239 ] 00:29:20.239 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.239 21:32:09 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.239 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.239 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:29:20.239 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.239 21:32:09 -- host/async_init.sh@53 -- # mktemp 00:29:20.239 21:32:09 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.QXnAoH6Pyh 00:29:20.239 21:32:09 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:20.239 21:32:09 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.QXnAoH6Pyh 00:29:20.239 21:32:09 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:20.239 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.239 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:29:20.239 21:32:09 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.239 21:32:09 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:20.239 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.239 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:29:20.239 [2024-04-26 21:32:09.390388] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:20.239 [2024-04-26 21:32:09.390532] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:20.239 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.239 21:32:09 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QXnAoH6Pyh 00:29:20.239 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.239 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:29:20.239 [2024-04-26 21:32:09.398370] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:20.239 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.239 21:32:09 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QXnAoH6Pyh 00:29:20.239 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.239 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:29:20.239 [2024-04-26 21:32:09.410361] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:20.239 [2024-04-26 21:32:09.410418] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:20.239 nvme0n1 00:29:20.239 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.239 21:32:09 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:20.239 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.239 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:29:20.499 [ 00:29:20.499 { 00:29:20.499 "aliases": [ 00:29:20.499 "3144081c-6f56-4d04-9e8d-4a676d64f994" 00:29:20.499 ], 00:29:20.499 "assigned_rate_limits": { 00:29:20.499 "r_mbytes_per_sec": 0, 00:29:20.499 "rw_ios_per_sec": 0, 00:29:20.499 "rw_mbytes_per_sec": 0, 00:29:20.499 "w_mbytes_per_sec": 0 00:29:20.499 }, 00:29:20.499 "block_size": 512, 00:29:20.499 "claimed": false, 00:29:20.499 "driver_specific": { 00:29:20.499 "mp_policy": "active_passive", 00:29:20.499 "nvme": [ 00:29:20.499 { 00:29:20.499 "ctrlr_data": { 00:29:20.499 "ana_reporting": false, 00:29:20.499 "cntlid": 3, 00:29:20.499 "firmware_revision": "24.05", 00:29:20.499 "model_number": "SPDK bdev Controller", 00:29:20.499 "multi_ctrlr": true, 00:29:20.499 "oacs": { 00:29:20.499 "firmware": 0, 00:29:20.499 "format": 0, 00:29:20.499 "ns_manage": 0, 00:29:20.499 "security": 0 00:29:20.499 }, 00:29:20.499 "serial_number": "00000000000000000000", 00:29:20.499 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:20.499 "vendor_id": "0x8086" 00:29:20.499 }, 00:29:20.499 "ns_data": { 00:29:20.499 "can_share": true, 00:29:20.499 "id": 1 00:29:20.499 }, 00:29:20.499 "trid": { 00:29:20.499 "adrfam": "IPv4", 00:29:20.499 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:20.499 "traddr": "10.0.0.2", 00:29:20.499 "trsvcid": "4421", 00:29:20.499 "trtype": 
"TCP" 00:29:20.499 }, 00:29:20.499 "vs": { 00:29:20.499 "nvme_version": "1.3" 00:29:20.499 } 00:29:20.499 } 00:29:20.499 ] 00:29:20.499 }, 00:29:20.499 "memory_domains": [ 00:29:20.499 { 00:29:20.499 "dma_device_id": "system", 00:29:20.499 "dma_device_type": 1 00:29:20.499 } 00:29:20.499 ], 00:29:20.499 "name": "nvme0n1", 00:29:20.499 "num_blocks": 2097152, 00:29:20.499 "product_name": "NVMe disk", 00:29:20.499 "supported_io_types": { 00:29:20.499 "abort": true, 00:29:20.499 "compare": true, 00:29:20.499 "compare_and_write": true, 00:29:20.499 "flush": true, 00:29:20.499 "nvme_admin": true, 00:29:20.499 "nvme_io": true, 00:29:20.499 "read": true, 00:29:20.499 "reset": true, 00:29:20.499 "unmap": false, 00:29:20.499 "write": true, 00:29:20.499 "write_zeroes": true 00:29:20.499 }, 00:29:20.499 "uuid": "3144081c-6f56-4d04-9e8d-4a676d64f994", 00:29:20.499 "zoned": false 00:29:20.499 } 00:29:20.499 ] 00:29:20.499 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.499 21:32:09 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:20.499 21:32:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.499 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:29:20.499 21:32:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.499 21:32:09 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.QXnAoH6Pyh 00:29:20.499 21:32:09 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:20.499 21:32:09 -- host/async_init.sh@78 -- # nvmftestfini 00:29:20.499 21:32:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:20.499 21:32:09 -- nvmf/common.sh@117 -- # sync 00:29:20.499 21:32:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:20.499 21:32:09 -- nvmf/common.sh@120 -- # set +e 00:29:20.499 21:32:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:20.499 21:32:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:20.499 rmmod nvme_tcp 00:29:20.499 rmmod nvme_fabrics 00:29:20.499 rmmod nvme_keyring 00:29:20.499 21:32:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:20.499 21:32:09 -- nvmf/common.sh@124 -- # set -e 00:29:20.499 21:32:09 -- nvmf/common.sh@125 -- # return 0 00:29:20.499 21:32:09 -- nvmf/common.sh@478 -- # '[' -n 98466 ']' 00:29:20.499 21:32:09 -- nvmf/common.sh@479 -- # killprocess 98466 00:29:20.499 21:32:09 -- common/autotest_common.sh@936 -- # '[' -z 98466 ']' 00:29:20.499 21:32:09 -- common/autotest_common.sh@940 -- # kill -0 98466 00:29:20.499 21:32:09 -- common/autotest_common.sh@941 -- # uname 00:29:20.499 21:32:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:20.499 21:32:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98466 00:29:20.499 21:32:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:20.499 21:32:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:20.499 killing process with pid 98466 00:29:20.499 21:32:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98466' 00:29:20.499 21:32:09 -- common/autotest_common.sh@955 -- # kill 98466 00:29:20.499 [2024-04-26 21:32:09.694448] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:20.499 [2024-04-26 21:32:09.694484] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:20.499 21:32:09 -- common/autotest_common.sh@960 -- # wait 98466 00:29:20.759 21:32:09 -- nvmf/common.sh@481 -- # 
'[' '' == iso ']' 00:29:20.759 21:32:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:20.759 21:32:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:20.759 21:32:09 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:20.759 21:32:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:20.759 21:32:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.759 21:32:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:20.759 21:32:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.759 21:32:09 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:20.759 00:29:20.759 real 0m2.663s 00:29:20.759 user 0m2.278s 00:29:20.759 sys 0m0.711s 00:29:20.759 21:32:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:20.759 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:29:20.759 ************************************ 00:29:20.759 END TEST nvmf_async_init 00:29:20.759 ************************************ 00:29:21.018 21:32:10 -- nvmf/nvmf.sh@92 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:21.018 21:32:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:21.018 21:32:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:21.018 21:32:10 -- common/autotest_common.sh@10 -- # set +x 00:29:21.018 ************************************ 00:29:21.018 START TEST dma 00:29:21.018 ************************************ 00:29:21.019 21:32:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:21.019 * Looking for test storage... 00:29:21.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:21.019 21:32:10 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:21.019 21:32:10 -- nvmf/common.sh@7 -- # uname -s 00:29:21.019 21:32:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.019 21:32:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.019 21:32:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.019 21:32:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.019 21:32:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.019 21:32:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.019 21:32:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.019 21:32:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.019 21:32:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.019 21:32:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.019 21:32:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:29:21.019 21:32:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:29:21.019 21:32:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.019 21:32:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.019 21:32:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:21.019 21:32:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.019 21:32:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:21.019 21:32:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.019 21:32:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.019 21:32:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:21.019 21:32:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.019 21:32:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.019 21:32:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.019 21:32:10 -- paths/export.sh@5 -- # export PATH 00:29:21.019 21:32:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.019 21:32:10 -- nvmf/common.sh@47 -- # : 0 00:29:21.019 21:32:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:21.019 21:32:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:21.019 21:32:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.019 21:32:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.019 21:32:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.019 21:32:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:21.019 21:32:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:21.019 21:32:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:21.019 21:32:10 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:21.019 21:32:10 -- host/dma.sh@13 -- # exit 0 00:29:21.019 00:29:21.019 real 0m0.158s 00:29:21.019 user 0m0.077s 00:29:21.019 sys 0m0.091s 00:29:21.019 21:32:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:21.019 21:32:10 -- common/autotest_common.sh@10 -- # set +x 00:29:21.019 ************************************ 00:29:21.019 END TEST dma 00:29:21.019 ************************************ 00:29:21.279 21:32:10 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:21.279 21:32:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:21.279 21:32:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:21.279 21:32:10 -- common/autotest_common.sh@10 -- # set +x 00:29:21.279 ************************************ 00:29:21.279 START TEST nvmf_identify 00:29:21.279 ************************************ 00:29:21.279 21:32:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:21.279 * Looking for test storage... 00:29:21.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:21.279 21:32:10 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:21.279 21:32:10 -- nvmf/common.sh@7 -- # uname -s 00:29:21.539 21:32:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.539 21:32:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.539 21:32:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.539 21:32:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.539 21:32:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.539 21:32:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.539 21:32:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.539 21:32:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.539 21:32:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.539 21:32:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.539 21:32:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:29:21.539 21:32:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:29:21.539 21:32:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.539 21:32:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.539 21:32:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:21.539 21:32:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.539 21:32:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:21.539 21:32:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.540 21:32:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.540 21:32:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.540 21:32:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.540 21:32:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.540 21:32:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.540 21:32:10 -- paths/export.sh@5 -- # export PATH 00:29:21.540 21:32:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.540 21:32:10 -- nvmf/common.sh@47 -- # : 0 00:29:21.540 21:32:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:21.540 21:32:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:21.540 21:32:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.540 21:32:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.540 21:32:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.540 21:32:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:21.540 21:32:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:21.540 21:32:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:21.540 21:32:10 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:21.540 21:32:10 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:21.540 21:32:10 -- host/identify.sh@14 -- # nvmftestinit 00:29:21.540 21:32:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:21.540 21:32:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.540 21:32:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:21.540 21:32:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:21.540 21:32:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:21.540 21:32:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.540 21:32:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:21.540 21:32:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.540 21:32:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:21.540 21:32:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:21.540 21:32:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:21.540 21:32:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:21.540 21:32:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:21.540 21:32:10 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:29:21.540 21:32:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.540 21:32:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.540 21:32:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:21.540 21:32:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:21.540 21:32:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:21.540 21:32:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:21.540 21:32:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:21.540 21:32:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.540 21:32:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:21.540 21:32:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:21.540 21:32:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:21.540 21:32:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:21.540 21:32:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:21.540 21:32:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:21.540 Cannot find device "nvmf_tgt_br" 00:29:21.540 21:32:10 -- nvmf/common.sh@155 -- # true 00:29:21.540 21:32:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:21.540 Cannot find device "nvmf_tgt_br2" 00:29:21.540 21:32:10 -- nvmf/common.sh@156 -- # true 00:29:21.540 21:32:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:21.540 21:32:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:21.540 Cannot find device "nvmf_tgt_br" 00:29:21.540 21:32:10 -- nvmf/common.sh@158 -- # true 00:29:21.540 21:32:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:21.540 Cannot find device "nvmf_tgt_br2" 00:29:21.540 21:32:10 -- nvmf/common.sh@159 -- # true 00:29:21.540 21:32:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:21.540 21:32:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:21.540 21:32:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:21.540 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:21.540 21:32:10 -- nvmf/common.sh@162 -- # true 00:29:21.540 21:32:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:21.540 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:21.540 21:32:10 -- nvmf/common.sh@163 -- # true 00:29:21.540 21:32:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:21.540 21:32:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:21.540 21:32:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:21.540 21:32:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:21.799 21:32:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:21.799 21:32:10 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:21.799 21:32:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:21.799 21:32:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:21.799 21:32:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:21.799 21:32:10 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:29:21.799 21:32:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:21.799 21:32:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:21.799 21:32:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:21.799 21:32:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:21.799 21:32:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:21.799 21:32:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:21.799 21:32:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:21.799 21:32:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:21.799 21:32:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:21.799 21:32:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:21.799 21:32:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:21.799 21:32:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:21.799 21:32:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:21.799 21:32:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:21.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:29:21.799 00:29:21.799 --- 10.0.0.2 ping statistics --- 00:29:21.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.799 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:29:21.799 21:32:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:21.799 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:21.799 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:29:21.799 00:29:21.799 --- 10.0.0.3 ping statistics --- 00:29:21.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.799 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:29:21.799 21:32:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:21.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:21.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:29:21.799 00:29:21.799 --- 10.0.0.1 ping statistics --- 00:29:21.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.799 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:29:21.799 21:32:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.799 21:32:10 -- nvmf/common.sh@422 -- # return 0 00:29:21.799 21:32:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:21.799 21:32:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.799 21:32:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:21.799 21:32:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:21.799 21:32:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.799 21:32:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:21.799 21:32:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:21.799 21:32:10 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:21.799 21:32:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:21.799 21:32:10 -- common/autotest_common.sh@10 -- # set +x 00:29:21.799 21:32:10 -- host/identify.sh@19 -- # nvmfpid=98746 00:29:21.799 21:32:10 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:21.799 21:32:10 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:21.799 21:32:10 -- host/identify.sh@23 -- # waitforlisten 98746 00:29:21.799 21:32:10 -- common/autotest_common.sh@817 -- # '[' -z 98746 ']' 00:29:21.799 21:32:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.799 21:32:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:21.799 21:32:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.799 21:32:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:21.799 21:32:10 -- common/autotest_common.sh@10 -- # set +x 00:29:21.799 [2024-04-26 21:32:11.025625] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:29:21.799 [2024-04-26 21:32:11.025691] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.058 [2024-04-26 21:32:11.151195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:22.058 [2024-04-26 21:32:11.203084] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.058 [2024-04-26 21:32:11.203134] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.058 [2024-04-26 21:32:11.203141] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.058 [2024-04-26 21:32:11.203147] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.058 [2024-04-26 21:32:11.203151] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
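The nvmf_veth_init steps above (and the pings that close them out) build the virtual topology the rest of the host tests run over: the initiator side stays in the root namespace as nvmf_init_if on 10.0.0.1, the target interfaces are moved into the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, and the peer ends are bridged together on nvmf_br. A condensed, single-target sketch of that wiring, using the same interface names the log prints (the second target interface and the FORWARD rule are set up the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2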
00:29:22.058 [2024-04-26 21:32:11.203215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.058 [2024-04-26 21:32:11.203296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.058 [2024-04-26 21:32:11.203383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.058 [2024-04-26 21:32:11.203387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:22.994 21:32:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:22.994 21:32:11 -- common/autotest_common.sh@850 -- # return 0 00:29:22.994 21:32:11 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:22.994 21:32:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.994 21:32:11 -- common/autotest_common.sh@10 -- # set +x 00:29:22.994 [2024-04-26 21:32:11.936950] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.994 21:32:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.994 21:32:11 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:22.994 21:32:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:22.994 21:32:11 -- common/autotest_common.sh@10 -- # set +x 00:29:22.994 21:32:12 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:22.994 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.994 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:29:22.994 Malloc0 00:29:22.994 21:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.994 21:32:12 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:22.994 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.994 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:29:22.994 21:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.994 21:32:12 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:22.994 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.994 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:29:22.994 21:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.994 21:32:12 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:22.994 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.994 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:29:22.994 [2024-04-26 21:32:12.066159] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.994 21:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.994 21:32:12 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:22.994 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.994 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:29:22.994 21:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.994 21:32:12 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:22.994 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.994 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:29:22.994 [2024-04-26 21:32:12.089861] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:22.994 [ 
00:29:22.994 { 00:29:22.994 "allow_any_host": true, 00:29:22.994 "hosts": [], 00:29:22.994 "listen_addresses": [ 00:29:22.994 { 00:29:22.994 "adrfam": "IPv4", 00:29:22.994 "traddr": "10.0.0.2", 00:29:22.994 "transport": "TCP", 00:29:22.994 "trsvcid": "4420", 00:29:22.994 "trtype": "TCP" 00:29:22.994 } 00:29:22.994 ], 00:29:22.994 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:22.994 "subtype": "Discovery" 00:29:22.994 }, 00:29:22.994 { 00:29:22.994 "allow_any_host": true, 00:29:22.994 "hosts": [], 00:29:22.994 "listen_addresses": [ 00:29:22.994 { 00:29:22.994 "adrfam": "IPv4", 00:29:22.994 "traddr": "10.0.0.2", 00:29:22.994 "transport": "TCP", 00:29:22.994 "trsvcid": "4420", 00:29:22.994 "trtype": "TCP" 00:29:22.994 } 00:29:22.994 ], 00:29:22.994 "max_cntlid": 65519, 00:29:22.995 "max_namespaces": 32, 00:29:22.995 "min_cntlid": 1, 00:29:22.995 "model_number": "SPDK bdev Controller", 00:29:22.995 "namespaces": [ 00:29:22.995 { 00:29:22.995 "bdev_name": "Malloc0", 00:29:22.995 "eui64": "ABCDEF0123456789", 00:29:22.995 "name": "Malloc0", 00:29:22.995 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:22.995 "nsid": 1, 00:29:22.995 "uuid": "53a01878-ce1c-4bbd-9257-6a0631536c46" 00:29:22.995 } 00:29:22.995 ], 00:29:22.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:22.995 "serial_number": "SPDK00000000000001", 00:29:22.995 "subtype": "NVMe" 00:29:22.995 } 00:29:22.995 ] 00:29:22.995 21:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.995 21:32:12 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:22.995 [2024-04-26 21:32:12.133283] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
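The identify pass below is driven entirely by that one command line and can be re-run by hand against the discovery listener the target just exposed; the transport ID is passed as a single -r string, and -L all enables every debug log flag, which is what produces the nvme_tcp/nvme_ctrlr traces that follow (paths as laid out in this workspace):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -L all \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'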
00:29:22.995 [2024-04-26 21:32:12.133354] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98799 ] 00:29:23.257 [2024-04-26 21:32:12.266179] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:23.257 [2024-04-26 21:32:12.266246] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:23.257 [2024-04-26 21:32:12.266250] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:23.257 [2024-04-26 21:32:12.266262] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:23.257 [2024-04-26 21:32:12.266271] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:23.257 [2024-04-26 21:32:12.266458] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:23.257 [2024-04-26 21:32:12.266505] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa97320 0 00:29:23.257 [2024-04-26 21:32:12.271358] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:23.257 [2024-04-26 21:32:12.271377] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:23.257 [2024-04-26 21:32:12.271380] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:23.257 [2024-04-26 21:32:12.271383] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:23.257 [2024-04-26 21:32:12.271422] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.257 [2024-04-26 21:32:12.271427] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.257 [2024-04-26 21:32:12.271430] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa97320) 00:29:23.257 [2024-04-26 21:32:12.271442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:23.257 [2024-04-26 21:32:12.271470] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae05f0, cid 0, qid 0 00:29:23.257 [2024-04-26 21:32:12.279346] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.257 [2024-04-26 21:32:12.279359] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.257 [2024-04-26 21:32:12.279363] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.257 [2024-04-26 21:32:12.279366] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae05f0) on tqpair=0xa97320 00:29:23.257 [2024-04-26 21:32:12.279375] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:23.257 [2024-04-26 21:32:12.279381] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:23.257 [2024-04-26 21:32:12.279385] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:23.257 [2024-04-26 21:32:12.279400] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.257 [2024-04-26 21:32:12.279403] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.257 [2024-04-26 21:32:12.279406] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa97320) 00:29:23.257 [2024-04-26 21:32:12.279413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.257 [2024-04-26 21:32:12.279436] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae05f0, cid 0, qid 0 00:29:23.257 [2024-04-26 21:32:12.279509] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.257 [2024-04-26 21:32:12.279519] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.257 [2024-04-26 21:32:12.279522] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.257 [2024-04-26 21:32:12.279525] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae05f0) on tqpair=0xa97320 00:29:23.257 [2024-04-26 21:32:12.279532] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:23.257 [2024-04-26 21:32:12.279537] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:23.257 [2024-04-26 21:32:12.279543] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.257 [2024-04-26 21:32:12.279545] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.257 [2024-04-26 21:32:12.279548] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa97320) 00:29:23.257 [2024-04-26 21:32:12.279554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.257 [2024-04-26 21:32:12.279568] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae05f0, cid 0, qid 0 00:29:23.257 [2024-04-26 21:32:12.279655] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.257 [2024-04-26 21:32:12.279665] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.257 [2024-04-26 21:32:12.279668] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.257 [2024-04-26 21:32:12.279671] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae05f0) on tqpair=0xa97320 00:29:23.257 [2024-04-26 21:32:12.279675] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:23.257 [2024-04-26 21:32:12.279681] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:23.257 [2024-04-26 21:32:12.279685] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.257 [2024-04-26 21:32:12.279688] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.257 [2024-04-26 21:32:12.279691] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa97320) 00:29:23.257 [2024-04-26 21:32:12.279696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.257 [2024-04-26 21:32:12.279708] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae05f0, cid 0, qid 0 00:29:23.257 [2024-04-26 21:32:12.279785] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.258 [2024-04-26 21:32:12.279795] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:29:23.258 [2024-04-26 21:32:12.279797] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.279800] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae05f0) on tqpair=0xa97320 00:29:23.258 [2024-04-26 21:32:12.279804] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:23.258 [2024-04-26 21:32:12.279811] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.279814] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.279816] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa97320) 00:29:23.258 [2024-04-26 21:32:12.279821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.258 [2024-04-26 21:32:12.279833] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae05f0, cid 0, qid 0 00:29:23.258 [2024-04-26 21:32:12.279917] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.258 [2024-04-26 21:32:12.279926] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.258 [2024-04-26 21:32:12.279929] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.279931] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae05f0) on tqpair=0xa97320 00:29:23.258 [2024-04-26 21:32:12.279935] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:23.258 [2024-04-26 21:32:12.279939] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:23.258 [2024-04-26 21:32:12.279944] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:23.258 [2024-04-26 21:32:12.280048] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:23.258 [2024-04-26 21:32:12.280053] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:23.258 [2024-04-26 21:32:12.280061] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280064] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280066] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa97320) 00:29:23.258 [2024-04-26 21:32:12.280071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.258 [2024-04-26 21:32:12.280084] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae05f0, cid 0, qid 0 00:29:23.258 [2024-04-26 21:32:12.280162] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.258 [2024-04-26 21:32:12.280170] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.258 [2024-04-26 21:32:12.280172] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280175] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae05f0) on tqpair=0xa97320 00:29:23.258 [2024-04-26 21:32:12.280178] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:23.258 [2024-04-26 21:32:12.280185] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280188] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280190] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa97320) 00:29:23.258 [2024-04-26 21:32:12.280196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.258 [2024-04-26 21:32:12.280207] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae05f0, cid 0, qid 0 00:29:23.258 [2024-04-26 21:32:12.280284] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.258 [2024-04-26 21:32:12.280291] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.258 [2024-04-26 21:32:12.280294] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280296] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae05f0) on tqpair=0xa97320 00:29:23.258 [2024-04-26 21:32:12.280300] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:23.258 [2024-04-26 21:32:12.280303] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:23.258 [2024-04-26 21:32:12.280308] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:23.258 [2024-04-26 21:32:12.280315] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:23.258 [2024-04-26 21:32:12.280322] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280325] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa97320) 00:29:23.258 [2024-04-26 21:32:12.280338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.258 [2024-04-26 21:32:12.280352] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae05f0, cid 0, qid 0 00:29:23.258 [2024-04-26 21:32:12.280504] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.258 [2024-04-26 21:32:12.280515] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.258 [2024-04-26 21:32:12.280517] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280520] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa97320): datao=0, datal=4096, cccid=0 00:29:23.258 [2024-04-26 21:32:12.280524] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae05f0) on tqpair(0xa97320): expected_datao=0, payload_size=4096 00:29:23.258 [2024-04-26 21:32:12.280527] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280534] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280537] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280544] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.258 [2024-04-26 21:32:12.280549] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.258 [2024-04-26 21:32:12.280551] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280554] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae05f0) on tqpair=0xa97320 00:29:23.258 [2024-04-26 21:32:12.280560] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:23.258 [2024-04-26 21:32:12.280563] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:23.258 [2024-04-26 21:32:12.280566] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:23.258 [2024-04-26 21:32:12.280573] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:23.258 [2024-04-26 21:32:12.280576] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:23.258 [2024-04-26 21:32:12.280579] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:23.258 [2024-04-26 21:32:12.280586] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:23.258 [2024-04-26 21:32:12.280591] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280594] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280596] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa97320) 00:29:23.258 [2024-04-26 21:32:12.280602] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:23.258 [2024-04-26 21:32:12.280615] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae05f0, cid 0, qid 0 00:29:23.258 [2024-04-26 21:32:12.280718] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.258 [2024-04-26 21:32:12.280726] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.258 [2024-04-26 21:32:12.280729] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280731] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae05f0) on tqpair=0xa97320 00:29:23.258 [2024-04-26 21:32:12.280738] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280741] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280743] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa97320) 00:29:23.258 [2024-04-26 21:32:12.280748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.258 [2024-04-26 21:32:12.280753] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280755] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280758] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa97320) 00:29:23.258 [2024-04-26 21:32:12.280762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.258 [2024-04-26 21:32:12.280767] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280770] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280772] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa97320) 00:29:23.258 [2024-04-26 21:32:12.280776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.258 [2024-04-26 21:32:12.280781] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280783] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280786] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.258 [2024-04-26 21:32:12.280790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.258 [2024-04-26 21:32:12.280793] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:23.258 [2024-04-26 21:32:12.280802] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:23.258 [2024-04-26 21:32:12.280807] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.280809] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa97320) 00:29:23.258 [2024-04-26 21:32:12.280815] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.258 [2024-04-26 21:32:12.280828] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae05f0, cid 0, qid 0 00:29:23.258 [2024-04-26 21:32:12.280832] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0750, cid 1, qid 0 00:29:23.258 [2024-04-26 21:32:12.280836] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae08b0, cid 2, qid 0 00:29:23.258 [2024-04-26 21:32:12.280839] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.258 [2024-04-26 21:32:12.280843] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0b70, cid 4, qid 0 00:29:23.258 [2024-04-26 21:32:12.280990] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.258 [2024-04-26 21:32:12.280998] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.258 [2024-04-26 21:32:12.281001] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281003] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0b70) on tqpair=0xa97320 00:29:23.258 [2024-04-26 21:32:12.281007] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:23.258 [2024-04-26 21:32:12.281011] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:23.258 [2024-04-26 21:32:12.281018] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281021] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa97320) 00:29:23.258 [2024-04-26 21:32:12.281026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.258 [2024-04-26 21:32:12.281038] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0b70, cid 4, qid 0 00:29:23.258 [2024-04-26 21:32:12.281127] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.258 [2024-04-26 21:32:12.281135] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.258 [2024-04-26 21:32:12.281137] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281140] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa97320): datao=0, datal=4096, cccid=4 00:29:23.258 [2024-04-26 21:32:12.281143] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae0b70) on tqpair(0xa97320): expected_datao=0, payload_size=4096 00:29:23.258 [2024-04-26 21:32:12.281146] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281151] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281154] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281167] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.258 [2024-04-26 21:32:12.281171] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.258 [2024-04-26 21:32:12.281174] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281176] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0b70) on tqpair=0xa97320 00:29:23.258 [2024-04-26 21:32:12.281185] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:23.258 [2024-04-26 21:32:12.281217] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281223] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa97320) 00:29:23.258 [2024-04-26 21:32:12.281229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.258 [2024-04-26 21:32:12.281234] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281237] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281240] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa97320) 00:29:23.258 [2024-04-26 21:32:12.281245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.258 [2024-04-26 21:32:12.281262] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0b70, cid 4, qid 0 00:29:23.258 [2024-04-26 21:32:12.281267] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0cd0, cid 5, qid 0 00:29:23.258 [2024-04-26 21:32:12.281431] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.258 [2024-04-26 21:32:12.281439] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.258 [2024-04-26 21:32:12.281441] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281444] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa97320): datao=0, datal=1024, cccid=4 00:29:23.258 [2024-04-26 21:32:12.281447] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae0b70) on tqpair(0xa97320): expected_datao=0, payload_size=1024 00:29:23.258 [2024-04-26 21:32:12.281450] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281456] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281458] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.258 [2024-04-26 21:32:12.281462] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.258 [2024-04-26 21:32:12.281467] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.259 [2024-04-26 21:32:12.281470] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.281474] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0cd0) on tqpair=0xa97320 00:29:23.259 [2024-04-26 21:32:12.326370] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.259 [2024-04-26 21:32:12.326413] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.259 [2024-04-26 21:32:12.326417] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.326421] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0b70) on tqpair=0xa97320 00:29:23.259 [2024-04-26 21:32:12.326448] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.326451] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa97320) 00:29:23.259 [2024-04-26 21:32:12.326465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.259 [2024-04-26 21:32:12.326507] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0b70, cid 4, qid 0 00:29:23.259 [2024-04-26 21:32:12.326628] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.259 [2024-04-26 21:32:12.326638] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.259 [2024-04-26 21:32:12.326641] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.326644] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa97320): datao=0, datal=3072, cccid=4 00:29:23.259 [2024-04-26 21:32:12.326648] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae0b70) on tqpair(0xa97320): expected_datao=0, payload_size=3072 00:29:23.259 [2024-04-26 21:32:12.326652] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.326660] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.326664] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.259 [2024-04-26 
21:32:12.326671] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.259 [2024-04-26 21:32:12.326675] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.259 [2024-04-26 21:32:12.326678] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.326680] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0b70) on tqpair=0xa97320 00:29:23.259 [2024-04-26 21:32:12.326688] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.326691] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa97320) 00:29:23.259 [2024-04-26 21:32:12.326697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.259 [2024-04-26 21:32:12.326715] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0b70, cid 4, qid 0 00:29:23.259 [2024-04-26 21:32:12.326797] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.259 [2024-04-26 21:32:12.326807] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.259 [2024-04-26 21:32:12.326809] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.326812] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa97320): datao=0, datal=8, cccid=4 00:29:23.259 [2024-04-26 21:32:12.326815] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae0b70) on tqpair(0xa97320): expected_datao=0, payload_size=8 00:29:23.259 [2024-04-26 21:32:12.326818] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.326823] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.326825] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.259 ===================================================== 00:29:23.259 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:23.259 ===================================================== 00:29:23.259 Controller Capabilities/Features 00:29:23.259 ================================ 00:29:23.259 Vendor ID: 0000 00:29:23.259 Subsystem Vendor ID: 0000 00:29:23.259 Serial Number: .................... 00:29:23.259 Model Number: ........................................ 
00:29:23.259 Firmware Version: 24.05 00:29:23.259 Recommended Arb Burst: 0 00:29:23.259 IEEE OUI Identifier: 00 00 00 00:29:23.259 Multi-path I/O 00:29:23.259 May have multiple subsystem ports: No 00:29:23.259 May have multiple controllers: No 00:29:23.259 Associated with SR-IOV VF: No 00:29:23.259 Max Data Transfer Size: 131072 00:29:23.259 Max Number of Namespaces: 0 00:29:23.259 Max Number of I/O Queues: 1024 00:29:23.259 NVMe Specification Version (VS): 1.3 00:29:23.259 NVMe Specification Version (Identify): 1.3 00:29:23.259 Maximum Queue Entries: 128 00:29:23.259 Contiguous Queues Required: Yes 00:29:23.259 Arbitration Mechanisms Supported 00:29:23.259 Weighted Round Robin: Not Supported 00:29:23.259 Vendor Specific: Not Supported 00:29:23.259 Reset Timeout: 15000 ms 00:29:23.259 Doorbell Stride: 4 bytes 00:29:23.259 NVM Subsystem Reset: Not Supported 00:29:23.259 Command Sets Supported 00:29:23.259 NVM Command Set: Supported 00:29:23.259 Boot Partition: Not Supported 00:29:23.259 Memory Page Size Minimum: 4096 bytes 00:29:23.259 Memory Page Size Maximum: 4096 bytes 00:29:23.259 Persistent Memory Region: Not Supported 00:29:23.259 Optional Asynchronous Events Supported 00:29:23.259 Namespace Attribute Notices: Not Supported 00:29:23.259 Firmware Activation Notices: Not Supported 00:29:23.259 ANA Change Notices: Not Supported 00:29:23.259 PLE Aggregate Log Change Notices: Not Supported 00:29:23.259 LBA Status Info Alert Notices: Not Supported 00:29:23.259 EGE Aggregate Log Change Notices: Not Supported 00:29:23.259 Normal NVM Subsystem Shutdown event: Not Supported 00:29:23.259 Zone Descriptor Change Notices: Not Supported 00:29:23.259 Discovery Log Change Notices: Supported 00:29:23.259 Controller Attributes 00:29:23.259 128-bit Host Identifier: Not Supported 00:29:23.259 Non-Operational Permissive Mode: Not Supported 00:29:23.259 NVM Sets: Not Supported 00:29:23.259 Read Recovery Levels: Not Supported 00:29:23.259 Endurance Groups: Not Supported 00:29:23.259 Predictable Latency Mode: Not Supported 00:29:23.259 Traffic Based Keep ALive: Not Supported 00:29:23.259 Namespace Granularity: Not Supported 00:29:23.259 SQ Associations: Not Supported 00:29:23.259 UUID List: Not Supported 00:29:23.259 Multi-Domain Subsystem: Not Supported 00:29:23.259 Fixed Capacity Management: Not Supported 00:29:23.259 Variable Capacity Management: Not Supported 00:29:23.259 Delete Endurance Group: Not Supported 00:29:23.259 Delete NVM Set: Not Supported 00:29:23.259 Extended LBA Formats Supported: Not Supported 00:29:23.259 Flexible Data Placement Supported: Not Supported 00:29:23.259 00:29:23.259 Controller Memory Buffer Support 00:29:23.259 ================================ 00:29:23.259 Supported: No 00:29:23.259 00:29:23.259 Persistent Memory Region Support 00:29:23.259 ================================ 00:29:23.259 Supported: No 00:29:23.259 00:29:23.259 Admin Command Set Attributes 00:29:23.259 ============================ 00:29:23.259 Security Send/Receive: Not Supported 00:29:23.259 Format NVM: Not Supported 00:29:23.259 Firmware Activate/Download: Not Supported 00:29:23.259 Namespace Management: Not Supported 00:29:23.259 Device Self-Test: Not Supported 00:29:23.259 Directives: Not Supported 00:29:23.259 NVMe-MI: Not Supported 00:29:23.259 Virtualization Management: Not Supported 00:29:23.259 Doorbell Buffer Config: Not Supported 00:29:23.259 Get LBA Status Capability: Not Supported 00:29:23.259 Command & Feature Lockdown Capability: Not Supported 00:29:23.259 Abort Command Limit: 1 00:29:23.259 Async 
Event Request Limit: 4 00:29:23.259 Number of Firmware Slots: N/A 00:29:23.259 Firmware Slot 1 Read-Only: N/A 00:29:23.259 [2024-04-26 21:32:12.367449] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.259 [2024-04-26 21:32:12.367482] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.259 [2024-04-26 21:32:12.367486] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.367490] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0b70) on tqpair=0xa97320 00:29:23.259 Firmware Activation Without Reset: N/A 00:29:23.259 Multiple Update Detection Support: N/A 00:29:23.259 Firmware Update Granularity: No Information Provided 00:29:23.259 Per-Namespace SMART Log: No 00:29:23.259 Asymmetric Namespace Access Log Page: Not Supported 00:29:23.259 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:23.259 Command Effects Log Page: Not Supported 00:29:23.259 Get Log Page Extended Data: Supported 00:29:23.259 Telemetry Log Pages: Not Supported 00:29:23.259 Persistent Event Log Pages: Not Supported 00:29:23.259 Supported Log Pages Log Page: May Support 00:29:23.259 Commands Supported & Effects Log Page: Not Supported 00:29:23.259 Feature Identifiers & Effects Log Page:May Support 00:29:23.259 NVMe-MI Commands & Effects Log Page: May Support 00:29:23.259 Data Area 4 for Telemetry Log: Not Supported 00:29:23.259 Error Log Page Entries Supported: 128 00:29:23.259 Keep Alive: Not Supported 00:29:23.259 00:29:23.259 NVM Command Set Attributes 00:29:23.259 ========================== 00:29:23.259 Submission Queue Entry Size 00:29:23.259 Max: 1 00:29:23.259 Min: 1 00:29:23.259 Completion Queue Entry Size 00:29:23.259 Max: 1 00:29:23.259 Min: 1 00:29:23.259 Number of Namespaces: 0 00:29:23.259 Compare Command: Not Supported 00:29:23.259 Write Uncorrectable Command: Not Supported 00:29:23.259 Dataset Management Command: Not Supported 00:29:23.259 Write Zeroes Command: Not Supported 00:29:23.259 Set Features Save Field: Not Supported 00:29:23.259 Reservations: Not Supported 00:29:23.259 Timestamp: Not Supported 00:29:23.259 Copy: Not Supported 00:29:23.259 Volatile Write Cache: Not Present 00:29:23.259 Atomic Write Unit (Normal): 1 00:29:23.259 Atomic Write Unit (PFail): 1 00:29:23.259 Atomic Compare & Write Unit: 1 00:29:23.259 Fused Compare & Write: Supported 00:29:23.259 Scatter-Gather List 00:29:23.259 SGL Command Set: Supported 00:29:23.259 SGL Keyed: Supported 00:29:23.259 SGL Bit Bucket Descriptor: Not Supported 00:29:23.259 SGL Metadata Pointer: Not Supported 00:29:23.259 Oversized SGL: Not Supported 00:29:23.259 SGL Metadata Address: Not Supported 00:29:23.259 SGL Offset: Supported 00:29:23.259 Transport SGL Data Block: Not Supported 00:29:23.259 Replay Protected Memory Block: Not Supported 00:29:23.259 00:29:23.259 Firmware Slot Information 00:29:23.259 ========================= 00:29:23.259 Active slot: 0 00:29:23.259 00:29:23.259 00:29:23.259 Error Log 00:29:23.259 ========= 00:29:23.259 00:29:23.259 Active Namespaces 00:29:23.259 ================= 00:29:23.259 Discovery Log Page 00:29:23.259 ================== 00:29:23.259 Generation Counter: 2 00:29:23.259 Number of Records: 2 00:29:23.259 Record Format: 0 00:29:23.259 00:29:23.259 Discovery Log Entry 0 00:29:23.259 ---------------------- 00:29:23.259 Transport Type: 3 (TCP) 00:29:23.259 Address Family: 1 (IPv4) 00:29:23.259 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:23.259 Entry Flags: 00:29:23.259 Duplicate Returned
Information: 1 00:29:23.259 Explicit Persistent Connection Support for Discovery: 1 00:29:23.259 Transport Requirements: 00:29:23.259 Secure Channel: Not Required 00:29:23.259 Port ID: 0 (0x0000) 00:29:23.259 Controller ID: 65535 (0xffff) 00:29:23.259 Admin Max SQ Size: 128 00:29:23.259 Transport Service Identifier: 4420 00:29:23.259 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:23.259 Transport Address: 10.0.0.2 00:29:23.259 Discovery Log Entry 1 00:29:23.259 ---------------------- 00:29:23.259 Transport Type: 3 (TCP) 00:29:23.259 Address Family: 1 (IPv4) 00:29:23.259 Subsystem Type: 2 (NVM Subsystem) 00:29:23.259 Entry Flags: 00:29:23.259 Duplicate Returned Information: 0 00:29:23.259 Explicit Persistent Connection Support for Discovery: 0 00:29:23.259 Transport Requirements: 00:29:23.259 Secure Channel: Not Required 00:29:23.259 Port ID: 0 (0x0000) 00:29:23.259 Controller ID: 65535 (0xffff) 00:29:23.259 Admin Max SQ Size: 128 00:29:23.259 Transport Service Identifier: 4420 00:29:23.259 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:23.259 Transport Address: 10.0.0.2 [2024-04-26 21:32:12.367643] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:23.259 [2024-04-26 21:32:12.367657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.259 [2024-04-26 21:32:12.367663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.259 [2024-04-26 21:32:12.367669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.259 [2024-04-26 21:32:12.367673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.259 [2024-04-26 21:32:12.367685] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.367689] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.259 [2024-04-26 21:32:12.367692] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.260 [2024-04-26 21:32:12.367703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 21:32:12.367728] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.367844] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.260 [2024-04-26 21:32:12.367854] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.367857] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.367860] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.260 [2024-04-26 21:32:12.367871] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.367875] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.367877] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.260 [2024-04-26 21:32:12.367883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 
cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 21:32:12.367902] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.368012] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.260 [2024-04-26 21:32:12.368022] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.368025] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368028] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.260 [2024-04-26 21:32:12.368032] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:23.260 [2024-04-26 21:32:12.368036] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:23.260 [2024-04-26 21:32:12.368043] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368046] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368049] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.260 [2024-04-26 21:32:12.368055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 21:32:12.368068] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.368137] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.260 [2024-04-26 21:32:12.368145] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.368148] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368151] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.260 [2024-04-26 21:32:12.368159] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368163] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368165] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.260 [2024-04-26 21:32:12.368171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 21:32:12.368183] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.368256] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.260 [2024-04-26 21:32:12.368264] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.368267] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368270] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.260 [2024-04-26 21:32:12.368278] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368281] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368283] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 
00:29:23.260 [2024-04-26 21:32:12.368289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 21:32:12.368302] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.368395] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.260 [2024-04-26 21:32:12.368404] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.368407] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368410] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.260 [2024-04-26 21:32:12.368418] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368421] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368424] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.260 [2024-04-26 21:32:12.368429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 21:32:12.368443] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.368504] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.260 [2024-04-26 21:32:12.368512] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.368514] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368517] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.260 [2024-04-26 21:32:12.368525] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368528] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368531] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.260 [2024-04-26 21:32:12.368537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 21:32:12.368554] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.368623] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.260 [2024-04-26 21:32:12.368633] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.368636] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368639] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.260 [2024-04-26 21:32:12.368647] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368650] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368653] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.260 [2024-04-26 21:32:12.368659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 
21:32:12.368674] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.368733] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.260 [2024-04-26 21:32:12.368741] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.368743] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368746] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.260 [2024-04-26 21:32:12.368754] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368757] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368760] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.260 [2024-04-26 21:32:12.368766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 21:32:12.368779] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.368842] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.260 [2024-04-26 21:32:12.368850] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.368853] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368856] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.260 [2024-04-26 21:32:12.368864] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368867] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368870] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.260 [2024-04-26 21:32:12.368875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 21:32:12.368888] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.368979] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.260 [2024-04-26 21:32:12.368987] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.368990] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.368992] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.260 [2024-04-26 21:32:12.369000] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.369003] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.369006] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.260 [2024-04-26 21:32:12.369012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 21:32:12.369024] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.369090] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:29:23.260 [2024-04-26 21:32:12.369102] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.369106] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.369110] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.260 [2024-04-26 21:32:12.369120] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.369125] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.369130] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.260 [2024-04-26 21:32:12.369138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 21:32:12.369158] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.369251] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.260 [2024-04-26 21:32:12.369263] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.369268] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.369272] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.260 [2024-04-26 21:32:12.369283] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.369287] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.369291] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.260 [2024-04-26 21:32:12.369298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 21:32:12.369317] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.369400] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.260 [2024-04-26 21:32:12.369412] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.369416] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.369420] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.260 [2024-04-26 21:32:12.369431] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.369436] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.369440] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.260 [2024-04-26 21:32:12.369448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.260 [2024-04-26 21:32:12.369467] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.260 [2024-04-26 21:32:12.369541] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.260 [2024-04-26 21:32:12.369551] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.260 [2024-04-26 21:32:12.369555] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:29:23.260 [2024-04-26 21:32:12.369560] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.261 [2024-04-26 21:32:12.369571] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.369576] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.369581] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.261 [2024-04-26 21:32:12.369589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.261 [2024-04-26 21:32:12.369608] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.261 [2024-04-26 21:32:12.369668] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.261 [2024-04-26 21:32:12.369679] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.261 [2024-04-26 21:32:12.369684] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.369688] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.261 [2024-04-26 21:32:12.369699] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.369704] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.369707] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.261 [2024-04-26 21:32:12.369715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.261 [2024-04-26 21:32:12.369733] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.261 [2024-04-26 21:32:12.369829] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.261 [2024-04-26 21:32:12.369841] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.261 [2024-04-26 21:32:12.369845] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.369850] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.261 [2024-04-26 21:32:12.369861] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.369866] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.369870] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.261 [2024-04-26 21:32:12.369879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.261 [2024-04-26 21:32:12.369897] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.261 [2024-04-26 21:32:12.369965] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.261 [2024-04-26 21:32:12.369976] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.261 [2024-04-26 21:32:12.369981] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.369986] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.261 [2024-04-26 21:32:12.369996] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.370001] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.370006] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.261 [2024-04-26 21:32:12.370014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.261 [2024-04-26 21:32:12.370032] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.261 [2024-04-26 21:32:12.370099] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.261 [2024-04-26 21:32:12.370110] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.261 [2024-04-26 21:32:12.370115] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.370120] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.261 [2024-04-26 21:32:12.370131] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.370136] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.370139] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.261 [2024-04-26 21:32:12.370148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.261 [2024-04-26 21:32:12.370168] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.261 [2024-04-26 21:32:12.370256] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.261 [2024-04-26 21:32:12.370268] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.261 [2024-04-26 21:32:12.370271] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.370274] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.261 [2024-04-26 21:32:12.370283] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.370286] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.370289] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.261 [2024-04-26 21:32:12.370295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.261 [2024-04-26 21:32:12.370312] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.261 [2024-04-26 21:32:12.374349] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.261 [2024-04-26 21:32:12.374393] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.261 [2024-04-26 21:32:12.374397] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.374400] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.261 [2024-04-26 21:32:12.374415] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.374419] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.261 [2024-04-26 
21:32:12.374421] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa97320) 00:29:23.261 [2024-04-26 21:32:12.374429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.261 [2024-04-26 21:32:12.374457] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae0a10, cid 3, qid 0 00:29:23.261 [2024-04-26 21:32:12.374536] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.261 [2024-04-26 21:32:12.374547] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.261 [2024-04-26 21:32:12.374551] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.261 [2024-04-26 21:32:12.374555] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xae0a10) on tqpair=0xa97320 00:29:23.261 [2024-04-26 21:32:12.374564] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:29:23.261 00:29:23.261 21:32:12 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:23.261 [2024-04-26 21:32:12.410428] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:29:23.261 [2024-04-26 21:32:12.410469] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98801 ] 00:29:23.526 [2024-04-26 21:32:12.543116] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:23.526 [2024-04-26 21:32:12.543181] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:23.526 [2024-04-26 21:32:12.543186] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:23.526 [2024-04-26 21:32:12.543197] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:23.526 [2024-04-26 21:32:12.543205] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:23.526 [2024-04-26 21:32:12.543341] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:23.526 [2024-04-26 21:32:12.543386] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18e4320 0 00:29:23.526 [2024-04-26 21:32:12.557351] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:23.526 [2024-04-26 21:32:12.557377] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:23.526 [2024-04-26 21:32:12.557381] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:23.526 [2024-04-26 21:32:12.557384] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:23.526 [2024-04-26 21:32:12.557426] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.526 [2024-04-26 21:32:12.557435] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.526 [2024-04-26 21:32:12.557438] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e4320) 00:29:23.526 [2024-04-26 21:32:12.557451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:23.526 [2024-04-26 21:32:12.557483] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192d5f0, cid 0, qid 0 00:29:23.526 [2024-04-26 21:32:12.565345] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.526 [2024-04-26 21:32:12.565360] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.526 [2024-04-26 21:32:12.565363] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.526 [2024-04-26 21:32:12.565367] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192d5f0) on tqpair=0x18e4320 00:29:23.526 [2024-04-26 21:32:12.565376] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:23.526 [2024-04-26 21:32:12.565382] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:23.526 [2024-04-26 21:32:12.565387] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:23.526 [2024-04-26 21:32:12.565401] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.526 [2024-04-26 21:32:12.565404] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.526 [2024-04-26 21:32:12.565407] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e4320) 00:29:23.526 [2024-04-26 21:32:12.565414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-26 21:32:12.565437] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192d5f0, cid 0, qid 0 00:29:23.526 [2024-04-26 21:32:12.565498] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.526 [2024-04-26 21:32:12.565507] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.526 [2024-04-26 21:32:12.565510] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.526 [2024-04-26 21:32:12.565512] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192d5f0) on tqpair=0x18e4320 00:29:23.526 [2024-04-26 21:32:12.565520] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:23.526 [2024-04-26 21:32:12.565526] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:23.526 [2024-04-26 21:32:12.565531] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.526 [2024-04-26 21:32:12.565534] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.526 [2024-04-26 21:32:12.565536] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e4320) 00:29:23.527 [2024-04-26 21:32:12.565542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.527 [2024-04-26 21:32:12.565557] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192d5f0, cid 0, qid 0 00:29:23.527 [2024-04-26 21:32:12.565623] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.527 [2024-04-26 21:32:12.565634] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.527 [2024-04-26 21:32:12.565637] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:29:23.527 [2024-04-26 21:32:12.565639] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192d5f0) on tqpair=0x18e4320 00:29:23.527 [2024-04-26 21:32:12.565644] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:23.527 [2024-04-26 21:32:12.565650] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:23.527 [2024-04-26 21:32:12.565655] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.565658] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.565660] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e4320) 00:29:23.527 [2024-04-26 21:32:12.565666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.527 [2024-04-26 21:32:12.565680] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192d5f0, cid 0, qid 0 00:29:23.527 [2024-04-26 21:32:12.565758] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.527 [2024-04-26 21:32:12.565766] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.527 [2024-04-26 21:32:12.565769] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.565772] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192d5f0) on tqpair=0x18e4320 00:29:23.527 [2024-04-26 21:32:12.565777] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:23.527 [2024-04-26 21:32:12.565784] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.565787] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.565789] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e4320) 00:29:23.527 [2024-04-26 21:32:12.565795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.527 [2024-04-26 21:32:12.565807] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192d5f0, cid 0, qid 0 00:29:23.527 [2024-04-26 21:32:12.565870] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.527 [2024-04-26 21:32:12.565877] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.527 [2024-04-26 21:32:12.565880] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.565883] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192d5f0) on tqpair=0x18e4320 00:29:23.527 [2024-04-26 21:32:12.565887] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:23.527 [2024-04-26 21:32:12.565890] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:23.527 [2024-04-26 21:32:12.565896] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:23.527 [2024-04-26 21:32:12.566000] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:23.527 [2024-04-26 21:32:12.566007] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:23.527 [2024-04-26 21:32:12.566015] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566017] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566020] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e4320) 00:29:23.527 [2024-04-26 21:32:12.566025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.527 [2024-04-26 21:32:12.566038] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192d5f0, cid 0, qid 0 00:29:23.527 [2024-04-26 21:32:12.566103] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.527 [2024-04-26 21:32:12.566110] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.527 [2024-04-26 21:32:12.566113] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566116] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192d5f0) on tqpair=0x18e4320 00:29:23.527 [2024-04-26 21:32:12.566120] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:23.527 [2024-04-26 21:32:12.566127] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566129] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566132] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e4320) 00:29:23.527 [2024-04-26 21:32:12.566137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.527 [2024-04-26 21:32:12.566149] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192d5f0, cid 0, qid 0 00:29:23.527 [2024-04-26 21:32:12.566220] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.527 [2024-04-26 21:32:12.566228] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.527 [2024-04-26 21:32:12.566231] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566233] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192d5f0) on tqpair=0x18e4320 00:29:23.527 [2024-04-26 21:32:12.566238] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:23.527 [2024-04-26 21:32:12.566241] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:23.527 [2024-04-26 21:32:12.566247] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:23.527 [2024-04-26 21:32:12.566254] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:23.527 [2024-04-26 21:32:12.566262] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566264] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e4320) 00:29:23.527 [2024-04-26 21:32:12.566270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.527 [2024-04-26 21:32:12.566282] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192d5f0, cid 0, qid 0 00:29:23.527 [2024-04-26 21:32:12.566393] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.527 [2024-04-26 21:32:12.566402] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.527 [2024-04-26 21:32:12.566405] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566408] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e4320): datao=0, datal=4096, cccid=0 00:29:23.527 [2024-04-26 21:32:12.566411] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x192d5f0) on tqpair(0x18e4320): expected_datao=0, payload_size=4096 00:29:23.527 [2024-04-26 21:32:12.566414] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566421] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566425] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566431] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.527 [2024-04-26 21:32:12.566436] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.527 [2024-04-26 21:32:12.566439] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566441] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192d5f0) on tqpair=0x18e4320 00:29:23.527 [2024-04-26 21:32:12.566448] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:23.527 [2024-04-26 21:32:12.566451] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:23.527 [2024-04-26 21:32:12.566454] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:23.527 [2024-04-26 21:32:12.566460] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:23.527 [2024-04-26 21:32:12.566464] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:23.527 [2024-04-26 21:32:12.566467] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:23.527 [2024-04-26 21:32:12.566474] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:23.527 [2024-04-26 21:32:12.566479] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566482] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566484] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e4320) 00:29:23.527 [2024-04-26 21:32:12.566490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:23.527 [2024-04-26 21:32:12.566504] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192d5f0, cid 0, qid 0 00:29:23.527 [2024-04-26 21:32:12.566573] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.527 [2024-04-26 21:32:12.566582] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.527 [2024-04-26 21:32:12.566585] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566588] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192d5f0) on tqpair=0x18e4320 00:29:23.527 [2024-04-26 21:32:12.566594] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566597] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566599] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18e4320) 00:29:23.527 [2024-04-26 21:32:12.566604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.527 [2024-04-26 21:32:12.566609] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566611] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566614] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18e4320) 00:29:23.527 [2024-04-26 21:32:12.566618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.527 [2024-04-26 21:32:12.566623] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566626] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.527 [2024-04-26 21:32:12.566628] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18e4320) 00:29:23.527 [2024-04-26 21:32:12.566633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.527 [2024-04-26 21:32:12.566638] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.566640] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.566643] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.528 [2024-04-26 21:32:12.566647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.528 [2024-04-26 21:32:12.566651] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.566660] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.566665] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.566667] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e4320) 00:29:23.528 [2024-04-26 21:32:12.566673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.528 [2024-04-26 21:32:12.566688] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x192d5f0, cid 0, qid 0 00:29:23.528 [2024-04-26 21:32:12.566692] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192d750, cid 1, qid 0 00:29:23.528 [2024-04-26 21:32:12.566696] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192d8b0, cid 2, qid 0 00:29:23.528 [2024-04-26 21:32:12.566699] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.528 [2024-04-26 21:32:12.566703] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192db70, cid 4, qid 0 00:29:23.528 [2024-04-26 21:32:12.566824] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.528 [2024-04-26 21:32:12.566834] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.528 [2024-04-26 21:32:12.566837] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.566840] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192db70) on tqpair=0x18e4320 00:29:23.528 [2024-04-26 21:32:12.566845] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:23.528 [2024-04-26 21:32:12.566849] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.566855] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.566859] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.566864] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.566867] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.566869] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e4320) 00:29:23.528 [2024-04-26 21:32:12.566874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:23.528 [2024-04-26 21:32:12.566887] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192db70, cid 4, qid 0 00:29:23.528 [2024-04-26 21:32:12.566952] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.528 [2024-04-26 21:32:12.566960] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.528 [2024-04-26 21:32:12.566962] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.566965] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192db70) on tqpair=0x18e4320 00:29:23.528 [2024-04-26 21:32:12.567010] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.567020] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.567026] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567029] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e4320) 00:29:23.528 [2024-04-26 21:32:12.567034] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.528 [2024-04-26 21:32:12.567046] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192db70, cid 4, qid 0 00:29:23.528 [2024-04-26 21:32:12.567116] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.528 [2024-04-26 21:32:12.567124] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.528 [2024-04-26 21:32:12.567127] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567129] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e4320): datao=0, datal=4096, cccid=4 00:29:23.528 [2024-04-26 21:32:12.567132] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x192db70) on tqpair(0x18e4320): expected_datao=0, payload_size=4096 00:29:23.528 [2024-04-26 21:32:12.567136] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567141] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567144] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567150] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.528 [2024-04-26 21:32:12.567155] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.528 [2024-04-26 21:32:12.567158] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567160] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192db70) on tqpair=0x18e4320 00:29:23.528 [2024-04-26 21:32:12.567168] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:23.528 [2024-04-26 21:32:12.567177] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.567184] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.567189] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567192] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e4320) 00:29:23.528 [2024-04-26 21:32:12.567197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.528 [2024-04-26 21:32:12.567210] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192db70, cid 4, qid 0 00:29:23.528 [2024-04-26 21:32:12.567306] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.528 [2024-04-26 21:32:12.567314] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.528 [2024-04-26 21:32:12.567316] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567319] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e4320): datao=0, datal=4096, cccid=4 00:29:23.528 [2024-04-26 21:32:12.567322] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x192db70) on tqpair(0x18e4320): expected_datao=0, payload_size=4096 00:29:23.528 [2024-04-26 21:32:12.567325] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
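The state machine traced above (set CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY controller, configure AER, set the keep alive timeout, set number of queues, identify the active namespaces) is the standard NVMe-oF controller bring-up that SPDK's host library drives internally from a single spdk_nvme_connect() call. A minimal sketch follows, using the target address and subsystem NQN that appear in this log; it is illustrative only, not the test's own code, and error handling is trimmed.

/* Minimal NVMe-oF/TCP host connect sketch (illustrative only).
 * spdk_nvme_connect() performs the sequence traced above: Fabrics CONNECT,
 * CC.EN = 1, poll CSTS.RDY, IDENTIFY controller, configure AER, keep alive,
 * number of queues, and namespace identification. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>
#include <string.h>

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr_opts ctrlr_opts;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "nvmf_tcp_connect";   /* arbitrary app name for this sketch */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Target address and NQN as they appear in the log output. */
	trid.trtype = SPDK_NVME_TRANSPORT_TCP;
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
	/* 10000 ms matches the "Keep Alive Granularity: 10000 ms" reported below. */
	ctrlr_opts.keep_alive_timeout_ms = 10000;

	ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.subnqn);
		return 1;
	}
	printf("controller %s is ready\n", trid.subnqn);
	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() only returns once the controller has reached the "ready (no timeout)" state seen in the trace, or NULL if any step of the bring-up fails.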
00:29:23.528 [2024-04-26 21:32:12.567339] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567343] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567349] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.528 [2024-04-26 21:32:12.567354] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.528 [2024-04-26 21:32:12.567356] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567359] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192db70) on tqpair=0x18e4320 00:29:23.528 [2024-04-26 21:32:12.567371] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.567378] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.567383] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567385] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e4320) 00:29:23.528 [2024-04-26 21:32:12.567390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.528 [2024-04-26 21:32:12.567403] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192db70, cid 4, qid 0 00:29:23.528 [2024-04-26 21:32:12.567484] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.528 [2024-04-26 21:32:12.567491] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.528 [2024-04-26 21:32:12.567494] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567497] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e4320): datao=0, datal=4096, cccid=4 00:29:23.528 [2024-04-26 21:32:12.567500] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x192db70) on tqpair(0x18e4320): expected_datao=0, payload_size=4096 00:29:23.528 [2024-04-26 21:32:12.567503] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567508] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567511] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567517] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.528 [2024-04-26 21:32:12.567521] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.528 [2024-04-26 21:32:12.567524] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567527] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192db70) on tqpair=0x18e4320 00:29:23.528 [2024-04-26 21:32:12.567533] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.567538] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.567546] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.567551] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.567556] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.567562] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:23.528 [2024-04-26 21:32:12.567567] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:23.528 [2024-04-26 21:32:12.567572] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:23.528 [2024-04-26 21:32:12.567596] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567599] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e4320) 00:29:23.528 [2024-04-26 21:32:12.567605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.528 [2024-04-26 21:32:12.567611] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.528 [2024-04-26 21:32:12.567614] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.567616] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18e4320) 00:29:23.529 [2024-04-26 21:32:12.567621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.529 [2024-04-26 21:32:12.567639] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192db70, cid 4, qid 0 00:29:23.529 [2024-04-26 21:32:12.567644] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192dcd0, cid 5, qid 0 00:29:23.529 [2024-04-26 21:32:12.567721] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.529 [2024-04-26 21:32:12.567729] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.529 [2024-04-26 21:32:12.567732] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.567735] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192db70) on tqpair=0x18e4320 00:29:23.529 [2024-04-26 21:32:12.567741] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.529 [2024-04-26 21:32:12.567745] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.529 [2024-04-26 21:32:12.567748] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.567750] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192dcd0) on tqpair=0x18e4320 00:29:23.529 [2024-04-26 21:32:12.567757] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.567760] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18e4320) 00:29:23.529 [2024-04-26 21:32:12.567765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.529 [2024-04-26 
21:32:12.567777] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192dcd0, cid 5, qid 0 00:29:23.529 [2024-04-26 21:32:12.567835] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.529 [2024-04-26 21:32:12.567843] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.529 [2024-04-26 21:32:12.567845] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.567848] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192dcd0) on tqpair=0x18e4320 00:29:23.529 [2024-04-26 21:32:12.567855] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.567858] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18e4320) 00:29:23.529 [2024-04-26 21:32:12.567863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.529 [2024-04-26 21:32:12.567874] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192dcd0, cid 5, qid 0 00:29:23.529 [2024-04-26 21:32:12.567943] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.529 [2024-04-26 21:32:12.567950] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.529 [2024-04-26 21:32:12.567953] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.567956] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192dcd0) on tqpair=0x18e4320 00:29:23.529 [2024-04-26 21:32:12.567963] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.567966] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18e4320) 00:29:23.529 [2024-04-26 21:32:12.567971] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.529 [2024-04-26 21:32:12.567982] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192dcd0, cid 5, qid 0 00:29:23.529 [2024-04-26 21:32:12.568050] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.529 [2024-04-26 21:32:12.568058] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.529 [2024-04-26 21:32:12.568061] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568063] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192dcd0) on tqpair=0x18e4320 00:29:23.529 [2024-04-26 21:32:12.568072] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568075] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18e4320) 00:29:23.529 [2024-04-26 21:32:12.568080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.529 [2024-04-26 21:32:12.568086] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568088] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18e4320) 00:29:23.529 [2024-04-26 21:32:12.568093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:23.529 [2024-04-26 21:32:12.568099] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568101] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x18e4320) 00:29:23.529 [2024-04-26 21:32:12.568106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.529 [2024-04-26 21:32:12.568112] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568114] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18e4320) 00:29:23.529 [2024-04-26 21:32:12.568119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.529 [2024-04-26 21:32:12.568133] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192dcd0, cid 5, qid 0 00:29:23.529 [2024-04-26 21:32:12.568137] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192db70, cid 4, qid 0 00:29:23.529 [2024-04-26 21:32:12.568140] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192de30, cid 6, qid 0 00:29:23.529 [2024-04-26 21:32:12.568144] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192df90, cid 7, qid 0 00:29:23.529 [2024-04-26 21:32:12.568300] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.529 [2024-04-26 21:32:12.568308] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.529 [2024-04-26 21:32:12.568311] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568313] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e4320): datao=0, datal=8192, cccid=5 00:29:23.529 [2024-04-26 21:32:12.568316] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x192dcd0) on tqpair(0x18e4320): expected_datao=0, payload_size=8192 00:29:23.529 [2024-04-26 21:32:12.568319] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568342] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568346] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568350] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.529 [2024-04-26 21:32:12.568355] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.529 [2024-04-26 21:32:12.568357] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568359] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e4320): datao=0, datal=512, cccid=4 00:29:23.529 [2024-04-26 21:32:12.568363] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x192db70) on tqpair(0x18e4320): expected_datao=0, payload_size=512 00:29:23.529 [2024-04-26 21:32:12.568365] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568370] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568373] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568377] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.529 [2024-04-26 21:32:12.568381] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.529 [2024-04-26 21:32:12.568384] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568386] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e4320): datao=0, datal=512, cccid=6 00:29:23.529 [2024-04-26 21:32:12.568389] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x192de30) on tqpair(0x18e4320): expected_datao=0, payload_size=512 00:29:23.529 [2024-04-26 21:32:12.568392] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568397] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568400] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568404] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:23.529 [2024-04-26 21:32:12.568408] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:23.529 [2024-04-26 21:32:12.568411] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568413] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18e4320): datao=0, datal=4096, cccid=7 00:29:23.529 [2024-04-26 21:32:12.568416] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x192df90) on tqpair(0x18e4320): expected_datao=0, payload_size=4096 00:29:23.529 [2024-04-26 21:32:12.568419] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568425] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568427] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568433] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.529 [2024-04-26 21:32:12.568438] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.529 [2024-04-26 21:32:12.568441] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568443] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192dcd0) on tqpair=0x18e4320 00:29:23.529 [2024-04-26 21:32:12.568458] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.529 [2024-04-26 21:32:12.568463] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.529 [2024-04-26 21:32:12.568466] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568468] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192db70) on tqpair=0x18e4320 00:29:23.529 [2024-04-26 21:32:12.568476] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.529 [2024-04-26 21:32:12.568481] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.529 [2024-04-26 21:32:12.568483] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568486] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192de30) on tqpair=0x18e4320 00:29:23.529 [2024-04-26 21:32:12.568492] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.529 [2024-04-26 21:32:12.568496] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.529 [2024-04-26 21:32:12.568498] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.529 [2024-04-26 21:32:12.568501] 
nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192df90) on tqpair=0x18e4320 00:29:23.529 ===================================================== 00:29:23.529 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:23.529 ===================================================== 00:29:23.529 Controller Capabilities/Features 00:29:23.529 ================================ 00:29:23.529 Vendor ID: 8086 00:29:23.529 Subsystem Vendor ID: 8086 00:29:23.529 Serial Number: SPDK00000000000001 00:29:23.529 Model Number: SPDK bdev Controller 00:29:23.530 Firmware Version: 24.05 00:29:23.530 Recommended Arb Burst: 6 00:29:23.530 IEEE OUI Identifier: e4 d2 5c 00:29:23.530 Multi-path I/O 00:29:23.530 May have multiple subsystem ports: Yes 00:29:23.530 May have multiple controllers: Yes 00:29:23.530 Associated with SR-IOV VF: No 00:29:23.530 Max Data Transfer Size: 131072 00:29:23.530 Max Number of Namespaces: 32 00:29:23.530 Max Number of I/O Queues: 127 00:29:23.530 NVMe Specification Version (VS): 1.3 00:29:23.530 NVMe Specification Version (Identify): 1.3 00:29:23.530 Maximum Queue Entries: 128 00:29:23.530 Contiguous Queues Required: Yes 00:29:23.530 Arbitration Mechanisms Supported 00:29:23.530 Weighted Round Robin: Not Supported 00:29:23.530 Vendor Specific: Not Supported 00:29:23.530 Reset Timeout: 15000 ms 00:29:23.530 Doorbell Stride: 4 bytes 00:29:23.530 NVM Subsystem Reset: Not Supported 00:29:23.530 Command Sets Supported 00:29:23.530 NVM Command Set: Supported 00:29:23.530 Boot Partition: Not Supported 00:29:23.530 Memory Page Size Minimum: 4096 bytes 00:29:23.530 Memory Page Size Maximum: 4096 bytes 00:29:23.530 Persistent Memory Region: Not Supported 00:29:23.530 Optional Asynchronous Events Supported 00:29:23.530 Namespace Attribute Notices: Supported 00:29:23.530 Firmware Activation Notices: Not Supported 00:29:23.530 ANA Change Notices: Not Supported 00:29:23.530 PLE Aggregate Log Change Notices: Not Supported 00:29:23.530 LBA Status Info Alert Notices: Not Supported 00:29:23.530 EGE Aggregate Log Change Notices: Not Supported 00:29:23.530 Normal NVM Subsystem Shutdown event: Not Supported 00:29:23.530 Zone Descriptor Change Notices: Not Supported 00:29:23.530 Discovery Log Change Notices: Not Supported 00:29:23.530 Controller Attributes 00:29:23.530 128-bit Host Identifier: Supported 00:29:23.530 Non-Operational Permissive Mode: Not Supported 00:29:23.530 NVM Sets: Not Supported 00:29:23.530 Read Recovery Levels: Not Supported 00:29:23.530 Endurance Groups: Not Supported 00:29:23.530 Predictable Latency Mode: Not Supported 00:29:23.530 Traffic Based Keep ALive: Not Supported 00:29:23.530 Namespace Granularity: Not Supported 00:29:23.530 SQ Associations: Not Supported 00:29:23.530 UUID List: Not Supported 00:29:23.530 Multi-Domain Subsystem: Not Supported 00:29:23.530 Fixed Capacity Management: Not Supported 00:29:23.530 Variable Capacity Management: Not Supported 00:29:23.530 Delete Endurance Group: Not Supported 00:29:23.530 Delete NVM Set: Not Supported 00:29:23.530 Extended LBA Formats Supported: Not Supported 00:29:23.530 Flexible Data Placement Supported: Not Supported 00:29:23.530 00:29:23.530 Controller Memory Buffer Support 00:29:23.530 ================================ 00:29:23.530 Supported: No 00:29:23.530 00:29:23.530 Persistent Memory Region Support 00:29:23.530 ================================ 00:29:23.530 Supported: No 00:29:23.530 00:29:23.530 Admin Command Set Attributes 00:29:23.530 ============================ 00:29:23.530 
Security Send/Receive: Not Supported 00:29:23.530 Format NVM: Not Supported 00:29:23.530 Firmware Activate/Download: Not Supported 00:29:23.530 Namespace Management: Not Supported 00:29:23.530 Device Self-Test: Not Supported 00:29:23.530 Directives: Not Supported 00:29:23.530 NVMe-MI: Not Supported 00:29:23.530 Virtualization Management: Not Supported 00:29:23.530 Doorbell Buffer Config: Not Supported 00:29:23.530 Get LBA Status Capability: Not Supported 00:29:23.530 Command & Feature Lockdown Capability: Not Supported 00:29:23.530 Abort Command Limit: 4 00:29:23.530 Async Event Request Limit: 4 00:29:23.530 Number of Firmware Slots: N/A 00:29:23.530 Firmware Slot 1 Read-Only: N/A 00:29:23.530 Firmware Activation Without Reset: N/A 00:29:23.530 Multiple Update Detection Support: N/A 00:29:23.530 Firmware Update Granularity: No Information Provided 00:29:23.530 Per-Namespace SMART Log: No 00:29:23.530 Asymmetric Namespace Access Log Page: Not Supported 00:29:23.530 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:23.530 Command Effects Log Page: Supported 00:29:23.530 Get Log Page Extended Data: Supported 00:29:23.530 Telemetry Log Pages: Not Supported 00:29:23.530 Persistent Event Log Pages: Not Supported 00:29:23.530 Supported Log Pages Log Page: May Support 00:29:23.530 Commands Supported & Effects Log Page: Not Supported 00:29:23.530 Feature Identifiers & Effects Log Page:May Support 00:29:23.530 NVMe-MI Commands & Effects Log Page: May Support 00:29:23.530 Data Area 4 for Telemetry Log: Not Supported 00:29:23.530 Error Log Page Entries Supported: 128 00:29:23.530 Keep Alive: Supported 00:29:23.530 Keep Alive Granularity: 10000 ms 00:29:23.530 00:29:23.530 NVM Command Set Attributes 00:29:23.530 ========================== 00:29:23.530 Submission Queue Entry Size 00:29:23.530 Max: 64 00:29:23.530 Min: 64 00:29:23.530 Completion Queue Entry Size 00:29:23.530 Max: 16 00:29:23.530 Min: 16 00:29:23.530 Number of Namespaces: 32 00:29:23.530 Compare Command: Supported 00:29:23.530 Write Uncorrectable Command: Not Supported 00:29:23.530 Dataset Management Command: Supported 00:29:23.530 Write Zeroes Command: Supported 00:29:23.530 Set Features Save Field: Not Supported 00:29:23.530 Reservations: Supported 00:29:23.530 Timestamp: Not Supported 00:29:23.530 Copy: Supported 00:29:23.530 Volatile Write Cache: Present 00:29:23.530 Atomic Write Unit (Normal): 1 00:29:23.530 Atomic Write Unit (PFail): 1 00:29:23.530 Atomic Compare & Write Unit: 1 00:29:23.530 Fused Compare & Write: Supported 00:29:23.530 Scatter-Gather List 00:29:23.530 SGL Command Set: Supported 00:29:23.530 SGL Keyed: Supported 00:29:23.530 SGL Bit Bucket Descriptor: Not Supported 00:29:23.530 SGL Metadata Pointer: Not Supported 00:29:23.530 Oversized SGL: Not Supported 00:29:23.530 SGL Metadata Address: Not Supported 00:29:23.530 SGL Offset: Supported 00:29:23.530 Transport SGL Data Block: Not Supported 00:29:23.530 Replay Protected Memory Block: Not Supported 00:29:23.530 00:29:23.530 Firmware Slot Information 00:29:23.530 ========================= 00:29:23.530 Active slot: 1 00:29:23.530 Slot 1 Firmware Revision: 24.05 00:29:23.530 00:29:23.530 00:29:23.530 Commands Supported and Effects 00:29:23.530 ============================== 00:29:23.530 Admin Commands 00:29:23.530 -------------- 00:29:23.530 Get Log Page (02h): Supported 00:29:23.530 Identify (06h): Supported 00:29:23.530 Abort (08h): Supported 00:29:23.530 Set Features (09h): Supported 00:29:23.530 Get Features (0Ah): Supported 00:29:23.530 Asynchronous Event Request 
(0Ch): Supported 00:29:23.530 Keep Alive (18h): Supported 00:29:23.530 I/O Commands 00:29:23.530 ------------ 00:29:23.530 Flush (00h): Supported LBA-Change 00:29:23.530 Write (01h): Supported LBA-Change 00:29:23.530 Read (02h): Supported 00:29:23.530 Compare (05h): Supported 00:29:23.530 Write Zeroes (08h): Supported LBA-Change 00:29:23.530 Dataset Management (09h): Supported LBA-Change 00:29:23.530 Copy (19h): Supported LBA-Change 00:29:23.530 Unknown (79h): Supported LBA-Change 00:29:23.530 Unknown (7Ah): Supported 00:29:23.530 00:29:23.530 Error Log 00:29:23.530 ========= 00:29:23.530 00:29:23.530 Arbitration 00:29:23.530 =========== 00:29:23.530 Arbitration Burst: 1 00:29:23.530 00:29:23.530 Power Management 00:29:23.530 ================ 00:29:23.530 Number of Power States: 1 00:29:23.530 Current Power State: Power State #0 00:29:23.530 Power State #0: 00:29:23.530 Max Power: 0.00 W 00:29:23.530 Non-Operational State: Operational 00:29:23.530 Entry Latency: Not Reported 00:29:23.530 Exit Latency: Not Reported 00:29:23.530 Relative Read Throughput: 0 00:29:23.530 Relative Read Latency: 0 00:29:23.530 Relative Write Throughput: 0 00:29:23.530 Relative Write Latency: 0 00:29:23.530 Idle Power: Not Reported 00:29:23.530 Active Power: Not Reported 00:29:23.530 Non-Operational Permissive Mode: Not Supported 00:29:23.530 00:29:23.530 Health Information 00:29:23.530 ================== 00:29:23.530 Critical Warnings: 00:29:23.530 Available Spare Space: OK 00:29:23.530 Temperature: OK 00:29:23.530 Device Reliability: OK 00:29:23.530 Read Only: No 00:29:23.530 Volatile Memory Backup: OK 00:29:23.530 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:23.530 Temperature Threshold: [2024-04-26 21:32:12.568612] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.530 [2024-04-26 21:32:12.568617] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18e4320) 00:29:23.530 [2024-04-26 21:32:12.568623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.530 [2024-04-26 21:32:12.568641] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192df90, cid 7, qid 0 00:29:23.530 [2024-04-26 21:32:12.568711] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.530 [2024-04-26 21:32:12.568719] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.530 [2024-04-26 21:32:12.568722] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.568725] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192df90) on tqpair=0x18e4320 00:29:23.531 [2024-04-26 21:32:12.568753] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:23.531 [2024-04-26 21:32:12.568765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.531 [2024-04-26 21:32:12.568770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.531 [2024-04-26 21:32:12.568775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.531 [2024-04-26 21:32:12.568780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
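The "Controller Capabilities/Features" dump above is printed from the identify data gathered during that initialization. An application can read the same fields through the public API; a short sketch, assuming a ctrlr handle obtained as in the previous sketch (the helper name print_ctrlr_summary is hypothetical).

/* Illustrative only: print a few of the identify-controller fields that appear
 * in the dump above, given a ctrlr obtained from spdk_nvme_connect(). */
#include "spdk/nvme.h"
#include <inttypes.h>
#include <stdio.h>

static void
print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	uint32_t nsid;

	printf("Vendor ID: %04x\n", cdata->vid);                  /* 8086 in the dump */
	printf("Serial Number: %.20s\n", cdata->sn);
	printf("Model Number: %.40s\n", cdata->mn);
	printf("Max Data Transfer Size: %u bytes\n",
	       spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));         /* 131072 above */
	printf("Max Number of Namespaces: %u\n", cdata->nn);       /* 32 above */

	/* Walk the active namespaces announced during init ("Namespace 1 was added"). */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		if (ns == NULL) {
			continue;
		}
		printf("ns %u: %" PRIu64 " sectors of %u bytes\n", nsid,
		       spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}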
00:29:23.531 [2024-04-26 21:32:12.568786] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.568789] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.568792] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.531 [2024-04-26 21:32:12.568797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.531 [2024-04-26 21:32:12.568812] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.531 [2024-04-26 21:32:12.568878] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.531 [2024-04-26 21:32:12.568886] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.531 [2024-04-26 21:32:12.568888] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.568891] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.531 [2024-04-26 21:32:12.568897] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.568900] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.568903] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.531 [2024-04-26 21:32:12.568908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.531 [2024-04-26 21:32:12.568922] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.531 [2024-04-26 21:32:12.569002] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.531 [2024-04-26 21:32:12.569010] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.531 [2024-04-26 21:32:12.569012] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569015] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.531 [2024-04-26 21:32:12.569019] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:23.531 [2024-04-26 21:32:12.569022] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:23.531 [2024-04-26 21:32:12.569029] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569032] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569034] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.531 [2024-04-26 21:32:12.569040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.531 [2024-04-26 21:32:12.569051] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.531 [2024-04-26 21:32:12.569111] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.531 [2024-04-26 21:32:12.569119] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.531 [2024-04-26 21:32:12.569121] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.531 [2024-04-26 
21:32:12.569124] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.531 [2024-04-26 21:32:12.569133] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569135] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569138] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.531 [2024-04-26 21:32:12.569143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.531 [2024-04-26 21:32:12.569155] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.531 [2024-04-26 21:32:12.569212] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.531 [2024-04-26 21:32:12.569219] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.531 [2024-04-26 21:32:12.569222] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569225] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.531 [2024-04-26 21:32:12.569233] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569236] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569238] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.531 [2024-04-26 21:32:12.569243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.531 [2024-04-26 21:32:12.569255] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.531 [2024-04-26 21:32:12.569340] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.531 [2024-04-26 21:32:12.569346] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.531 [2024-04-26 21:32:12.569348] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569351] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.531 [2024-04-26 21:32:12.569359] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569362] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569365] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.531 [2024-04-26 21:32:12.569370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.531 [2024-04-26 21:32:12.569382] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.531 [2024-04-26 21:32:12.569443] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.531 [2024-04-26 21:32:12.569451] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.531 [2024-04-26 21:32:12.569453] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569456] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.531 [2024-04-26 21:32:12.569463] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569466] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569469] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.531 [2024-04-26 21:32:12.569474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.531 [2024-04-26 21:32:12.569486] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.531 [2024-04-26 21:32:12.569550] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.531 [2024-04-26 21:32:12.569559] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.531 [2024-04-26 21:32:12.569563] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569567] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.531 [2024-04-26 21:32:12.569578] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569581] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.531 [2024-04-26 21:32:12.569583] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.531 [2024-04-26 21:32:12.569588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.532 [2024-04-26 21:32:12.569602] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.532 [2024-04-26 21:32:12.569672] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.532 [2024-04-26 21:32:12.569680] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.532 [2024-04-26 21:32:12.569682] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.569685] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.532 [2024-04-26 21:32:12.569693] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.569696] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.569698] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.532 [2024-04-26 21:32:12.569703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.532 [2024-04-26 21:32:12.569715] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.532 [2024-04-26 21:32:12.569800] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.532 [2024-04-26 21:32:12.569809] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.532 [2024-04-26 21:32:12.569812] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.569815] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.532 [2024-04-26 21:32:12.569823] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.569826] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.532 [2024-04-26 
21:32:12.569829] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.532 [2024-04-26 21:32:12.569834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.532 [2024-04-26 21:32:12.569847] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.532 [2024-04-26 21:32:12.569926] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.532 [2024-04-26 21:32:12.569934] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.532 [2024-04-26 21:32:12.569937] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.569939] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.532 [2024-04-26 21:32:12.569948] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.569951] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.569953] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.532 [2024-04-26 21:32:12.569959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.532 [2024-04-26 21:32:12.569971] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.532 [2024-04-26 21:32:12.570047] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.532 [2024-04-26 21:32:12.570055] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.532 [2024-04-26 21:32:12.570058] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570061] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.532 [2024-04-26 21:32:12.570069] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570073] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570075] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.532 [2024-04-26 21:32:12.570081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.532 [2024-04-26 21:32:12.570093] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.532 [2024-04-26 21:32:12.570169] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.532 [2024-04-26 21:32:12.570178] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.532 [2024-04-26 21:32:12.570180] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570183] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.532 [2024-04-26 21:32:12.570192] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570195] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570197] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.532 [2024-04-26 21:32:12.570203] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.532 [2024-04-26 21:32:12.570215] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.532 [2024-04-26 21:32:12.570287] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.532 [2024-04-26 21:32:12.570295] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.532 [2024-04-26 21:32:12.570298] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570301] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.532 [2024-04-26 21:32:12.570309] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570312] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570315] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.532 [2024-04-26 21:32:12.570320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.532 [2024-04-26 21:32:12.570344] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.532 [2024-04-26 21:32:12.570408] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.532 [2024-04-26 21:32:12.570416] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.532 [2024-04-26 21:32:12.570418] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570421] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.532 [2024-04-26 21:32:12.570430] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570433] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570436] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.532 [2024-04-26 21:32:12.570441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.532 [2024-04-26 21:32:12.570454] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.532 [2024-04-26 21:32:12.570529] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.532 [2024-04-26 21:32:12.570538] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.532 [2024-04-26 21:32:12.570541] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570543] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.532 [2024-04-26 21:32:12.570552] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570557] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570561] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.532 [2024-04-26 21:32:12.570569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.532 [2024-04-26 21:32:12.570586] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.532 [2024-04-26 21:32:12.570645] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.532 [2024-04-26 21:32:12.570654] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.532 [2024-04-26 21:32:12.570657] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570660] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.532 [2024-04-26 21:32:12.570668] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570672] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570674] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.532 [2024-04-26 21:32:12.570680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.532 [2024-04-26 21:32:12.570693] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.532 [2024-04-26 21:32:12.570767] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.532 [2024-04-26 21:32:12.570776] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.532 [2024-04-26 21:32:12.570778] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570781] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.532 [2024-04-26 21:32:12.570790] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570793] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570795] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.532 [2024-04-26 21:32:12.570801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.532 [2024-04-26 21:32:12.570813] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.532 [2024-04-26 21:32:12.570882] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.532 [2024-04-26 21:32:12.570890] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.532 [2024-04-26 21:32:12.570893] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570896] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.532 [2024-04-26 21:32:12.570904] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570907] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.570910] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.532 [2024-04-26 21:32:12.570915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.532 [2024-04-26 21:32:12.570927] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.532 [2024-04-26 21:32:12.571003] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
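The tail of the trace ("Prepare to destruct SSD", RTD3E = 0 us, shutdown timeout = 10000 ms, followed by repeated FABRIC PROPERTY GET polls) is the controller shutdown path. On the host side this corresponds to detaching the controller; a hedged sketch of the asynchronous form, again assuming the ctrlr from the earlier sketch (the helper name shutdown_controller is hypothetical).

/* Illustrative only: the shutdown handshake traced above is what detach drives.
 * spdk_nvme_detach(ctrlr) does it synchronously; the async form lets the caller
 * keep doing other work while the shutdown status is polled over the fabric. */
#include "spdk/nvme.h"
#include <errno.h>
#include <stdio.h>

static int
shutdown_controller(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *detach_ctx = NULL;
	int rc;

	rc = spdk_nvme_detach_async(ctrlr, &detach_ctx);
	if (rc != 0) {
		fprintf(stderr, "failed to start detach\n");
		return rc;
	}

	/* Poll until the shutdown completes; the repeated property reads show up
	 * in the log as the FABRIC PROPERTY GET commands seen above. */
	if (detach_ctx != NULL) {
		while (spdk_nvme_detach_poll_async(detach_ctx) == -EAGAIN) {
			/* other application work could run here */
		}
	}
	return 0;
}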
00:29:23.532 [2024-04-26 21:32:12.571011] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.532 [2024-04-26 21:32:12.571014] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.571017] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.532 [2024-04-26 21:32:12.571025] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.571028] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.532 [2024-04-26 21:32:12.571031] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.532 [2024-04-26 21:32:12.571036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.533 [2024-04-26 21:32:12.571048] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.533 [2024-04-26 21:32:12.571119] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.533 [2024-04-26 21:32:12.571127] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.533 [2024-04-26 21:32:12.571130] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571133] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.533 [2024-04-26 21:32:12.571141] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571144] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571147] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.533 [2024-04-26 21:32:12.571153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.533 [2024-04-26 21:32:12.571165] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.533 [2024-04-26 21:32:12.571254] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.533 [2024-04-26 21:32:12.571262] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.533 [2024-04-26 21:32:12.571264] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571267] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.533 [2024-04-26 21:32:12.571274] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571277] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571280] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.533 [2024-04-26 21:32:12.571285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.533 [2024-04-26 21:32:12.571297] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.533 [2024-04-26 21:32:12.571370] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.533 [2024-04-26 21:32:12.571378] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.533 [2024-04-26 21:32:12.571380] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571383] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.533 [2024-04-26 21:32:12.571391] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571394] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571397] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.533 [2024-04-26 21:32:12.571402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.533 [2024-04-26 21:32:12.571414] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.533 [2024-04-26 21:32:12.571474] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.533 [2024-04-26 21:32:12.571482] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.533 [2024-04-26 21:32:12.571484] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571487] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.533 [2024-04-26 21:32:12.571495] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571498] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571500] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.533 [2024-04-26 21:32:12.571506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.533 [2024-04-26 21:32:12.571517] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.533 [2024-04-26 21:32:12.571584] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.533 [2024-04-26 21:32:12.571592] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.533 [2024-04-26 21:32:12.571594] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571597] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.533 [2024-04-26 21:32:12.571605] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571608] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571611] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.533 [2024-04-26 21:32:12.571616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.533 [2024-04-26 21:32:12.571627] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.533 [2024-04-26 21:32:12.571691] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.533 [2024-04-26 21:32:12.571699] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.533 [2024-04-26 21:32:12.571702] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571705] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on 
tqpair=0x18e4320 00:29:23.533 [2024-04-26 21:32:12.571713] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571715] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571718] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.533 [2024-04-26 21:32:12.571723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.533 [2024-04-26 21:32:12.571734] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.533 [2024-04-26 21:32:12.571804] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.533 [2024-04-26 21:32:12.571811] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.533 [2024-04-26 21:32:12.571814] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571817] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.533 [2024-04-26 21:32:12.571825] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571828] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571830] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.533 [2024-04-26 21:32:12.571835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.533 [2024-04-26 21:32:12.571847] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.533 [2024-04-26 21:32:12.571915] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.533 [2024-04-26 21:32:12.571923] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.533 [2024-04-26 21:32:12.571925] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571928] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.533 [2024-04-26 21:32:12.571936] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571939] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.571941] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.533 [2024-04-26 21:32:12.571946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.533 [2024-04-26 21:32:12.571958] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.533 [2024-04-26 21:32:12.572020] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.533 [2024-04-26 21:32:12.572028] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.533 [2024-04-26 21:32:12.572031] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.572033] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.533 [2024-04-26 21:32:12.572041] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.572044] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.572047] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.533 [2024-04-26 21:32:12.572052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.533 [2024-04-26 21:32:12.572063] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.533 [2024-04-26 21:32:12.572127] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.533 [2024-04-26 21:32:12.572135] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.533 [2024-04-26 21:32:12.572137] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.572140] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.533 [2024-04-26 21:32:12.572148] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.572151] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.572153] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.533 [2024-04-26 21:32:12.572158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.533 [2024-04-26 21:32:12.572170] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.533 [2024-04-26 21:32:12.572241] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.533 [2024-04-26 21:32:12.572248] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.533 [2024-04-26 21:32:12.572251] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.572254] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.533 [2024-04-26 21:32:12.572261] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.572264] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.572267] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.533 [2024-04-26 21:32:12.572272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.533 [2024-04-26 21:32:12.572283] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.533 [2024-04-26 21:32:12.572359] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.533 [2024-04-26 21:32:12.572367] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.533 [2024-04-26 21:32:12.572370] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.572372] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.533 [2024-04-26 21:32:12.572380] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.572383] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.533 [2024-04-26 21:32:12.572386] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 
00:29:23.533 [2024-04-26 21:32:12.572391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.534 [2024-04-26 21:32:12.572403] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.534 [2024-04-26 21:32:12.572464] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.534 [2024-04-26 21:32:12.572471] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.534 [2024-04-26 21:32:12.572474] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572477] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.534 [2024-04-26 21:32:12.572485] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572488] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572490] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.534 [2024-04-26 21:32:12.572495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.534 [2024-04-26 21:32:12.572507] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.534 [2024-04-26 21:32:12.572568] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.534 [2024-04-26 21:32:12.572578] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.534 [2024-04-26 21:32:12.572582] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572585] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.534 [2024-04-26 21:32:12.572596] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572600] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572604] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.534 [2024-04-26 21:32:12.572612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.534 [2024-04-26 21:32:12.572631] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.534 [2024-04-26 21:32:12.572685] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.534 [2024-04-26 21:32:12.572697] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.534 [2024-04-26 21:32:12.572701] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572705] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.534 [2024-04-26 21:32:12.572716] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572721] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572724] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.534 [2024-04-26 21:32:12.572732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.534 
[2024-04-26 21:32:12.572752] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.534 [2024-04-26 21:32:12.572809] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.534 [2024-04-26 21:32:12.572821] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.534 [2024-04-26 21:32:12.572825] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572829] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.534 [2024-04-26 21:32:12.572840] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572844] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572848] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.534 [2024-04-26 21:32:12.572855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.534 [2024-04-26 21:32:12.572874] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.534 [2024-04-26 21:32:12.572941] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.534 [2024-04-26 21:32:12.572952] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.534 [2024-04-26 21:32:12.572957] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572962] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.534 [2024-04-26 21:32:12.572973] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572978] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.572983] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.534 [2024-04-26 21:32:12.572991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.534 [2024-04-26 21:32:12.573010] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.534 [2024-04-26 21:32:12.573071] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.534 [2024-04-26 21:32:12.573082] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.534 [2024-04-26 21:32:12.573087] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.573091] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.534 [2024-04-26 21:32:12.573102] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.573107] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.573111] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.534 [2024-04-26 21:32:12.573120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.534 [2024-04-26 21:32:12.573139] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.534 [2024-04-26 21:32:12.573191] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.534 [2024-04-26 21:32:12.573202] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.534 [2024-04-26 21:32:12.573207] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.573212] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.534 [2024-04-26 21:32:12.573223] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.573227] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.573231] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.534 [2024-04-26 21:32:12.573239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.534 [2024-04-26 21:32:12.573258] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.534 [2024-04-26 21:32:12.573321] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.534 [2024-04-26 21:32:12.577350] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.534 [2024-04-26 21:32:12.577370] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.577375] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.534 [2024-04-26 21:32:12.577391] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.577395] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.577398] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18e4320) 00:29:23.534 [2024-04-26 21:32:12.577405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.534 [2024-04-26 21:32:12.577432] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x192da10, cid 3, qid 0 00:29:23.534 [2024-04-26 21:32:12.577489] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:23.534 [2024-04-26 21:32:12.577498] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:23.534 [2024-04-26 21:32:12.577501] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:23.534 [2024-04-26 21:32:12.577504] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x192da10) on tqpair=0x18e4320 00:29:23.534 [2024-04-26 21:32:12.577511] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 8 milliseconds 00:29:23.534 0 Kelvin (-273 Celsius) 00:29:23.534 Available Spare: 0% 00:29:23.534 Available Spare Threshold: 0% 00:29:23.534 Life Percentage Used: 0% 00:29:23.534 Data Units Read: 0 00:29:23.534 Data Units Written: 0 00:29:23.534 Host Read Commands: 0 00:29:23.534 Host Write Commands: 0 00:29:23.534 Controller Busy Time: 0 minutes 00:29:23.534 Power Cycles: 0 00:29:23.534 Power On Hours: 0 hours 00:29:23.534 Unsafe Shutdowns: 0 00:29:23.534 Unrecoverable Media Errors: 0 00:29:23.534 Lifetime Error Log Entries: 0 00:29:23.534 Warning Temperature Time: 0 minutes 00:29:23.534 Critical Temperature Time: 0 minutes 00:29:23.534 00:29:23.534 Number of Queues 00:29:23.534 ================ 00:29:23.534 Number of I/O 
Submission Queues: 127 00:29:23.534 Number of I/O Completion Queues: 127 00:29:23.534 00:29:23.534 Active Namespaces 00:29:23.534 ================= 00:29:23.534 Namespace ID:1 00:29:23.534 Error Recovery Timeout: Unlimited 00:29:23.534 Command Set Identifier: NVM (00h) 00:29:23.534 Deallocate: Supported 00:29:23.534 Deallocated/Unwritten Error: Not Supported 00:29:23.534 Deallocated Read Value: Unknown 00:29:23.534 Deallocate in Write Zeroes: Not Supported 00:29:23.534 Deallocated Guard Field: 0xFFFF 00:29:23.534 Flush: Supported 00:29:23.534 Reservation: Supported 00:29:23.534 Namespace Sharing Capabilities: Multiple Controllers 00:29:23.534 Size (in LBAs): 131072 (0GiB) 00:29:23.534 Capacity (in LBAs): 131072 (0GiB) 00:29:23.534 Utilization (in LBAs): 131072 (0GiB) 00:29:23.534 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:23.534 EUI64: ABCDEF0123456789 00:29:23.534 UUID: 53a01878-ce1c-4bbd-9257-6a0631536c46 00:29:23.534 Thin Provisioning: Not Supported 00:29:23.534 Per-NS Atomic Units: Yes 00:29:23.534 Atomic Boundary Size (Normal): 0 00:29:23.534 Atomic Boundary Size (PFail): 0 00:29:23.534 Atomic Boundary Offset: 0 00:29:23.534 Maximum Single Source Range Length: 65535 00:29:23.534 Maximum Copy Length: 65535 00:29:23.534 Maximum Source Range Count: 1 00:29:23.534 NGUID/EUI64 Never Reused: No 00:29:23.534 Namespace Write Protected: No 00:29:23.534 Number of LBA Formats: 1 00:29:23.534 Current LBA Format: LBA Format #00 00:29:23.534 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:23.534 00:29:23.534 21:32:12 -- host/identify.sh@51 -- # sync 00:29:23.534 21:32:12 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:23.535 21:32:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.535 21:32:12 -- common/autotest_common.sh@10 -- # set +x 00:29:23.535 21:32:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.535 21:32:12 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:23.535 21:32:12 -- host/identify.sh@56 -- # nvmftestfini 00:29:23.535 21:32:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:23.535 21:32:12 -- nvmf/common.sh@117 -- # sync 00:29:23.535 21:32:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:23.535 21:32:12 -- nvmf/common.sh@120 -- # set +e 00:29:23.535 21:32:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:23.535 21:32:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:23.535 rmmod nvme_tcp 00:29:23.535 rmmod nvme_fabrics 00:29:23.535 rmmod nvme_keyring 00:29:23.535 21:32:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:23.535 21:32:12 -- nvmf/common.sh@124 -- # set -e 00:29:23.535 21:32:12 -- nvmf/common.sh@125 -- # return 0 00:29:23.535 21:32:12 -- nvmf/common.sh@478 -- # '[' -n 98746 ']' 00:29:23.535 21:32:12 -- nvmf/common.sh@479 -- # killprocess 98746 00:29:23.535 21:32:12 -- common/autotest_common.sh@936 -- # '[' -z 98746 ']' 00:29:23.535 21:32:12 -- common/autotest_common.sh@940 -- # kill -0 98746 00:29:23.535 21:32:12 -- common/autotest_common.sh@941 -- # uname 00:29:23.535 21:32:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:23.535 21:32:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98746 00:29:23.535 21:32:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:23.535 21:32:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:23.535 killing process with pid 98746 00:29:23.535 21:32:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98746' 00:29:23.535 21:32:12 
-- common/autotest_common.sh@955 -- # kill 98746 00:29:23.535 [2024-04-26 21:32:12.746980] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:23.535 21:32:12 -- common/autotest_common.sh@960 -- # wait 98746 00:29:23.795 21:32:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:23.795 21:32:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:23.795 21:32:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:23.795 21:32:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:23.795 21:32:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:23.795 21:32:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.795 21:32:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:23.795 21:32:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.795 21:32:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:23.795 00:29:23.795 real 0m2.620s 00:29:23.795 user 0m7.154s 00:29:23.795 sys 0m0.714s 00:29:23.795 ************************************ 00:29:23.795 END TEST nvmf_identify 00:29:23.795 ************************************ 00:29:23.795 21:32:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:23.795 21:32:13 -- common/autotest_common.sh@10 -- # set +x 00:29:24.054 21:32:13 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:24.054 21:32:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:24.054 21:32:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:24.054 21:32:13 -- common/autotest_common.sh@10 -- # set +x 00:29:24.054 ************************************ 00:29:24.054 START TEST nvmf_perf 00:29:24.054 ************************************ 00:29:24.054 21:32:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:24.054 * Looking for test storage... 
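For reference, the nvmftestfini teardown traced above boils down to a short shell sequence: unload the kernel NVMe/TCP initiator modules, stop the nvmf_tgt process, and tear down the target-side networking. The sketch below restates the commands visible in the trace; the ip netns delete line is an assumption about what _remove_spdk_ns does, since its output is redirected away in the log.

    # Unload the kernel NVMe/TCP initiator stack loaded for the host tests
    modprobe -v -r nvme-tcp        # rmmod's nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics

    # Stop the SPDK target started for this test
    nvmfpid=98746                  # pid recorded earlier in this run
    kill "$nvmfpid" && wait "$nvmfpid"

    # Assumed cleanup inside _remove_spdk_ns, then flush the initiator address
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
    ip -4 addr flush nvmf_init_if

The same setup/teardown bracket repeats for each host test in this job, which is why the perf.sh run that starts next walks through an identical namespace and target bring-up.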
00:29:24.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:24.054 21:32:13 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:24.054 21:32:13 -- nvmf/common.sh@7 -- # uname -s 00:29:24.054 21:32:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.054 21:32:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.054 21:32:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.054 21:32:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.054 21:32:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.054 21:32:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.054 21:32:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.054 21:32:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.054 21:32:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.054 21:32:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.313 21:32:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:29:24.313 21:32:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:29:24.313 21:32:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.313 21:32:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.313 21:32:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:24.313 21:32:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.313 21:32:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:24.313 21:32:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.313 21:32:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.313 21:32:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.313 21:32:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.313 21:32:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.313 21:32:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.313 21:32:13 -- paths/export.sh@5 -- # export PATH 00:29:24.313 21:32:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.313 21:32:13 -- nvmf/common.sh@47 -- # : 0 00:29:24.313 21:32:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:24.313 21:32:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:24.313 21:32:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.313 21:32:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.313 21:32:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.313 21:32:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:24.313 21:32:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:24.313 21:32:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:24.313 21:32:13 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:24.313 21:32:13 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:24.313 21:32:13 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:24.313 21:32:13 -- host/perf.sh@17 -- # nvmftestinit 00:29:24.313 21:32:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:24.314 21:32:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.314 21:32:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:24.314 21:32:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:24.314 21:32:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:24.314 21:32:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.314 21:32:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:24.314 21:32:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.314 21:32:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:24.314 21:32:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:24.314 21:32:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:24.314 21:32:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:24.314 21:32:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:24.314 21:32:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:24.314 21:32:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.314 21:32:13 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.314 21:32:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:24.314 21:32:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:24.314 21:32:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:24.314 21:32:13 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:24.314 21:32:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:24.314 21:32:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.314 21:32:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:24.314 21:32:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:24.314 21:32:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:24.314 21:32:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:24.314 21:32:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:24.314 21:32:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:24.314 Cannot find device "nvmf_tgt_br" 00:29:24.314 21:32:13 -- nvmf/common.sh@155 -- # true 00:29:24.314 21:32:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:24.314 Cannot find device "nvmf_tgt_br2" 00:29:24.314 21:32:13 -- nvmf/common.sh@156 -- # true 00:29:24.314 21:32:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:24.314 21:32:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:24.314 Cannot find device "nvmf_tgt_br" 00:29:24.314 21:32:13 -- nvmf/common.sh@158 -- # true 00:29:24.314 21:32:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:24.314 Cannot find device "nvmf_tgt_br2" 00:29:24.314 21:32:13 -- nvmf/common.sh@159 -- # true 00:29:24.314 21:32:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:24.314 21:32:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:24.314 21:32:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:24.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:24.314 21:32:13 -- nvmf/common.sh@162 -- # true 00:29:24.314 21:32:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:24.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:24.314 21:32:13 -- nvmf/common.sh@163 -- # true 00:29:24.314 21:32:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:24.314 21:32:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:24.314 21:32:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:24.314 21:32:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:24.314 21:32:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:24.314 21:32:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:24.573 21:32:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:24.573 21:32:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:24.573 21:32:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:24.573 21:32:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:24.573 21:32:13 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:24.573 21:32:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:24.573 21:32:13 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:24.573 21:32:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:24.573 21:32:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
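Stripped of the xtrace noise, the nvmf_veth_init steps above amount to one network namespace and three veth pairs, with the initiator address in the root namespace and the two target addresses inside the namespace. A condensed sketch, using exactly the names and addresses from this run (the bridge and iptables wiring follows in the next trace lines):

    # Namespace that will host the SPDK target
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_if ends carry traffic, the *_br ends join a bridge later
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target-side interfaces into the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up on both sides
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up

The nvmf_br bridge created next ties the three *_br peers together so the initiator in the root namespace can reach both target addresses, and an iptables ACCEPT rule opens TCP port 4420 on the initiator interface.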
00:29:24.573 21:32:13 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:24.573 21:32:13 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:24.573 21:32:13 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:24.573 21:32:13 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:24.573 21:32:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:24.573 21:32:13 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:24.573 21:32:13 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:24.573 21:32:13 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:24.573 21:32:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:24.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:29:24.573 00:29:24.573 --- 10.0.0.2 ping statistics --- 00:29:24.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.573 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:29:24.573 21:32:13 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:24.573 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:24.573 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:29:24.573 00:29:24.573 --- 10.0.0.3 ping statistics --- 00:29:24.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.573 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:29:24.573 21:32:13 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:24.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:24.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:29:24.573 00:29:24.573 --- 10.0.0.1 ping statistics --- 00:29:24.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.573 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:29:24.573 21:32:13 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.573 21:32:13 -- nvmf/common.sh@422 -- # return 0 00:29:24.573 21:32:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:24.573 21:32:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.573 21:32:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:24.573 21:32:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:24.573 21:32:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.573 21:32:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:24.573 21:32:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:24.573 21:32:13 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:24.573 21:32:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:24.573 21:32:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:24.573 21:32:13 -- common/autotest_common.sh@10 -- # set +x 00:29:24.573 21:32:13 -- nvmf/common.sh@470 -- # nvmfpid=98976 00:29:24.573 21:32:13 -- nvmf/common.sh@471 -- # waitforlisten 98976 00:29:24.573 21:32:13 -- common/autotest_common.sh@817 -- # '[' -z 98976 ']' 00:29:24.573 21:32:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.573 21:32:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:24.573 21:32:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
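With connectivity verified by the three pings above, nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket is available (the full nvmf_tgt command line is echoed in the next trace lines). A minimal approximation, assuming a simple poll on /var/tmp/spdk.sock stands in for the real waitforlisten helper:

    # Quick reachability check across the veth topology
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

    # Kernel initiator for the host side, then the SPDK target in the namespace
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Stand-in for waitforlisten: wait for the app's RPC unix socket to appear
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done

Once the socket exists, the rest of the trace is driven through scripts/rpc.py against it: transport creation, the nqn.2016-06.io.spdk:cnode1 subsystem, its Malloc0 and Nvme0n1 namespaces, and the 10.0.0.2:4420 listener.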
00:29:24.573 21:32:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:24.573 21:32:13 -- common/autotest_common.sh@10 -- # set +x 00:29:24.573 21:32:13 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:24.573 [2024-04-26 21:32:13.784064] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:29:24.573 [2024-04-26 21:32:13.784177] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.832 [2024-04-26 21:32:13.932925] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.832 [2024-04-26 21:32:13.985479] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.832 [2024-04-26 21:32:13.985533] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.832 [2024-04-26 21:32:13.985539] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.832 [2024-04-26 21:32:13.985544] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.832 [2024-04-26 21:32:13.985549] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.832 [2024-04-26 21:32:13.985732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.832 [2024-04-26 21:32:13.986874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.832 [2024-04-26 21:32:13.986976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.832 [2024-04-26 21:32:13.986978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.806 21:32:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:25.806 21:32:14 -- common/autotest_common.sh@850 -- # return 0 00:29:25.806 21:32:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:25.806 21:32:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:25.806 21:32:14 -- common/autotest_common.sh@10 -- # set +x 00:29:25.806 21:32:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.806 21:32:14 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:25.806 21:32:14 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:29:26.074 21:32:15 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:29:26.074 21:32:15 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:26.333 21:32:15 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:29:26.333 21:32:15 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:26.593 21:32:15 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:26.593 21:32:15 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:29:26.593 21:32:15 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:26.593 21:32:15 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:26.593 21:32:15 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:26.593 [2024-04-26 21:32:15.801142] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.593 21:32:15 -- host/perf.sh@44 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.851 21:32:16 -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:26.851 21:32:16 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:27.110 21:32:16 -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:27.110 21:32:16 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:27.368 21:32:16 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:27.626 [2024-04-26 21:32:16.668660] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.626 21:32:16 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:27.886 21:32:16 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:27.886 21:32:16 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:29:27.886 21:32:16 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:27.886 21:32:16 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:29:28.825 Initializing NVMe Controllers 00:29:28.825 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:29:28.825 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:29:28.825 Initialization complete. Launching workers. 00:29:28.825 ======================================================== 00:29:28.825 Latency(us) 00:29:28.825 Device Information : IOPS MiB/s Average min max 00:29:28.825 PCIE (0000:00:10.0) NSID 1 from core 0: 20416.00 79.75 1567.20 264.82 7707.73 00:29:28.825 ======================================================== 00:29:28.825 Total : 20416.00 79.75 1567.20 264.82 7707.73 00:29:28.825 00:29:28.825 21:32:17 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:30.202 Initializing NVMe Controllers 00:29:30.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:30.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:30.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:30.202 Initialization complete. Launching workers. 
00:29:30.202 ======================================================== 00:29:30.202 Latency(us) 00:29:30.202 Device Information : IOPS MiB/s Average min max 00:29:30.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4527.00 17.68 220.65 78.70 7156.97 00:29:30.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8185.95 5958.19 12048.58 00:29:30.202 ======================================================== 00:29:30.202 Total : 4650.00 18.16 431.35 78.70 12048.58 00:29:30.202 00:29:30.202 21:32:19 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:31.585 [2024-04-26 21:32:20.608387] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1001a70 is same with the state(5) to be set 00:29:31.585 Initializing NVMe Controllers 00:29:31.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:31.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:31.585 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:31.585 Initialization complete. Launching workers. 00:29:31.585 ======================================================== 00:29:31.585 Latency(us) 00:29:31.585 Device Information : IOPS MiB/s Average min max 00:29:31.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9316.99 36.39 3435.46 597.18 7256.56 00:29:31.585 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2712.00 10.59 11874.69 4965.03 20100.67 00:29:31.585 ======================================================== 00:29:31.585 Total : 12028.99 46.99 5338.13 597.18 20100.67 00:29:31.585 00:29:31.585 21:32:20 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:29:31.585 21:32:20 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:34.120 Initializing NVMe Controllers 00:29:34.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.120 Controller IO queue size 128, less than required. 00:29:34.120 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:34.120 Controller IO queue size 128, less than required. 00:29:34.120 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:34.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:34.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:34.120 Initialization complete. Launching workers. 
00:29:34.120 ======================================================== 00:29:34.120 Latency(us) 00:29:34.120 Device Information : IOPS MiB/s Average min max 00:29:34.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1721.48 430.37 75545.23 51156.10 156609.77 00:29:34.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 569.99 142.50 237483.86 65368.25 361434.30 00:29:34.120 ======================================================== 00:29:34.120 Total : 2291.47 572.87 115826.71 51156.10 361434.30 00:29:34.120 00:29:34.120 21:32:23 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:34.377 No valid NVMe controllers or AIO or URING devices found 00:29:34.377 Initializing NVMe Controllers 00:29:34.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:34.377 Controller IO queue size 128, less than required. 00:29:34.378 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:34.378 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:34.378 Controller IO queue size 128, less than required. 00:29:34.378 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:34.378 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:29:34.378 WARNING: Some requested NVMe devices were skipped 00:29:34.378 21:32:23 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:36.913 Initializing NVMe Controllers 00:29:36.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.913 Controller IO queue size 128, less than required. 00:29:36.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:36.913 Controller IO queue size 128, less than required. 00:29:36.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:36.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:36.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:36.913 Initialization complete. Launching workers. 
00:29:36.913 00:29:36.913 ==================== 00:29:36.913 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:36.913 TCP transport: 00:29:36.913 polls: 10101 00:29:36.913 idle_polls: 6928 00:29:36.913 sock_completions: 3173 00:29:36.913 nvme_completions: 6373 00:29:36.913 submitted_requests: 9596 00:29:36.913 queued_requests: 1 00:29:36.913 00:29:36.913 ==================== 00:29:36.913 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:36.913 TCP transport: 00:29:36.913 polls: 13221 00:29:36.913 idle_polls: 9897 00:29:36.913 sock_completions: 3324 00:29:36.913 nvme_completions: 6469 00:29:36.913 submitted_requests: 9648 00:29:36.913 queued_requests: 1 00:29:36.913 ======================================================== 00:29:36.913 Latency(us) 00:29:36.913 Device Information : IOPS MiB/s Average min max 00:29:36.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1592.98 398.25 82606.17 44375.53 139422.43 00:29:36.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1616.98 404.25 79931.72 43498.09 118170.11 00:29:36.913 ======================================================== 00:29:36.913 Total : 3209.97 802.49 81258.95 43498.09 139422.43 00:29:36.913 00:29:36.913 21:32:26 -- host/perf.sh@66 -- # sync 00:29:36.913 21:32:26 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:37.171 21:32:26 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:37.171 21:32:26 -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:29:37.171 21:32:26 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:37.430 21:32:26 -- host/perf.sh@72 -- # ls_guid=a2416424-09c9-42c4-b8af-9b8ff7eb31d5 00:29:37.430 21:32:26 -- host/perf.sh@73 -- # get_lvs_free_mb a2416424-09c9-42c4-b8af-9b8ff7eb31d5 00:29:37.430 21:32:26 -- common/autotest_common.sh@1350 -- # local lvs_uuid=a2416424-09c9-42c4-b8af-9b8ff7eb31d5 00:29:37.430 21:32:26 -- common/autotest_common.sh@1351 -- # local lvs_info 00:29:37.430 21:32:26 -- common/autotest_common.sh@1352 -- # local fc 00:29:37.430 21:32:26 -- common/autotest_common.sh@1353 -- # local cs 00:29:37.430 21:32:26 -- common/autotest_common.sh@1354 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:37.689 21:32:26 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:29:37.689 { 00:29:37.689 "base_bdev": "Nvme0n1", 00:29:37.689 "block_size": 4096, 00:29:37.689 "cluster_size": 4194304, 00:29:37.689 "free_clusters": 1278, 00:29:37.689 "name": "lvs_0", 00:29:37.689 "total_data_clusters": 1278, 00:29:37.689 "uuid": "a2416424-09c9-42c4-b8af-9b8ff7eb31d5" 00:29:37.689 } 00:29:37.689 ]' 00:29:37.689 21:32:26 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="a2416424-09c9-42c4-b8af-9b8ff7eb31d5") .free_clusters' 00:29:37.689 21:32:26 -- common/autotest_common.sh@1355 -- # fc=1278 00:29:37.689 21:32:26 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="a2416424-09c9-42c4-b8af-9b8ff7eb31d5") .cluster_size' 00:29:37.689 5112 00:29:37.689 21:32:26 -- common/autotest_common.sh@1356 -- # cs=4194304 00:29:37.689 21:32:26 -- common/autotest_common.sh@1359 -- # free_mb=5112 00:29:37.689 21:32:26 -- common/autotest_common.sh@1360 -- # echo 5112 00:29:37.689 21:32:26 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:29:37.689 21:32:26 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u a2416424-09c9-42c4-b8af-9b8ff7eb31d5 lbd_0 5112 00:29:37.949 21:32:27 -- host/perf.sh@80 -- # lb_guid=17998a9e-d55a-4920-8120-55d4bcfe9d70 00:29:37.949 21:32:27 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 17998a9e-d55a-4920-8120-55d4bcfe9d70 lvs_n_0 00:29:38.208 21:32:27 -- host/perf.sh@83 -- # ls_nested_guid=cdde0080-467a-4718-b58c-54dbcc9fd0e3 00:29:38.208 21:32:27 -- host/perf.sh@84 -- # get_lvs_free_mb cdde0080-467a-4718-b58c-54dbcc9fd0e3 00:29:38.208 21:32:27 -- common/autotest_common.sh@1350 -- # local lvs_uuid=cdde0080-467a-4718-b58c-54dbcc9fd0e3 00:29:38.208 21:32:27 -- common/autotest_common.sh@1351 -- # local lvs_info 00:29:38.208 21:32:27 -- common/autotest_common.sh@1352 -- # local fc 00:29:38.208 21:32:27 -- common/autotest_common.sh@1353 -- # local cs 00:29:38.208 21:32:27 -- common/autotest_common.sh@1354 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:38.467 21:32:27 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:29:38.467 { 00:29:38.467 "base_bdev": "Nvme0n1", 00:29:38.467 "block_size": 4096, 00:29:38.467 "cluster_size": 4194304, 00:29:38.467 "free_clusters": 0, 00:29:38.467 "name": "lvs_0", 00:29:38.467 "total_data_clusters": 1278, 00:29:38.467 "uuid": "a2416424-09c9-42c4-b8af-9b8ff7eb31d5" 00:29:38.467 }, 00:29:38.467 { 00:29:38.467 "base_bdev": "17998a9e-d55a-4920-8120-55d4bcfe9d70", 00:29:38.467 "block_size": 4096, 00:29:38.467 "cluster_size": 4194304, 00:29:38.467 "free_clusters": 1276, 00:29:38.467 "name": "lvs_n_0", 00:29:38.467 "total_data_clusters": 1276, 00:29:38.467 "uuid": "cdde0080-467a-4718-b58c-54dbcc9fd0e3" 00:29:38.467 } 00:29:38.467 ]' 00:29:38.467 21:32:27 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="cdde0080-467a-4718-b58c-54dbcc9fd0e3") .free_clusters' 00:29:38.467 21:32:27 -- common/autotest_common.sh@1355 -- # fc=1276 00:29:38.467 21:32:27 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="cdde0080-467a-4718-b58c-54dbcc9fd0e3") .cluster_size' 00:29:38.726 21:32:27 -- common/autotest_common.sh@1356 -- # cs=4194304 00:29:38.726 21:32:27 -- common/autotest_common.sh@1359 -- # free_mb=5104 00:29:38.726 21:32:27 -- common/autotest_common.sh@1360 -- # echo 5104 00:29:38.726 5104 00:29:38.726 21:32:27 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:29:38.726 21:32:27 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cdde0080-467a-4718-b58c-54dbcc9fd0e3 lbd_nest_0 5104 00:29:38.726 21:32:27 -- host/perf.sh@88 -- # lb_nested_guid=b99531ee-f4f0-4f84-9438-7847848fb6b0 00:29:38.726 21:32:27 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.986 21:32:28 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:38.986 21:32:28 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b99531ee-f4f0-4f84-9438-7847848fb6b0 00:29:39.245 21:32:28 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:39.508 21:32:28 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:39.508 21:32:28 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:39.508 21:32:28 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:39.508 21:32:28 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:39.508 21:32:28 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:39.767 No valid NVMe controllers or AIO or URING devices found 00:29:39.768 Initializing NVMe Controllers 00:29:39.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.768 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:29:39.768 WARNING: Some requested NVMe devices were skipped 00:29:39.768 21:32:28 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:39.768 21:32:28 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:51.992 Initializing NVMe Controllers 00:29:51.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:51.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:51.992 Initialization complete. Launching workers. 00:29:51.992 ======================================================== 00:29:51.992 Latency(us) 00:29:51.992 Device Information : IOPS MiB/s Average min max 00:29:51.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1091.20 136.40 916.15 276.86 7691.47 00:29:51.992 ======================================================== 00:29:51.992 Total : 1091.20 136.40 916.15 276.86 7691.47 00:29:51.992 00:29:51.992 21:32:39 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:51.992 21:32:39 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:51.992 21:32:39 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:51.992 No valid NVMe controllers or AIO or URING devices found 00:29:51.992 Initializing NVMe Controllers 00:29:51.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:51.992 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:29:51.992 WARNING: Some requested NVMe devices were skipped 00:29:51.992 21:32:39 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:51.992 21:32:39 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:01.966 [2024-04-26 21:32:49.686373] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1185fe0 is same with the state(5) to be set 00:30:01.966 [2024-04-26 21:32:49.686437] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1185fe0 is same with the state(5) to be set 00:30:01.966 [2024-04-26 21:32:49.686445] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1185fe0 is same with the state(5) to be set 00:30:01.966 [2024-04-26 21:32:49.686450] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1185fe0 is same with the state(5) to be set 00:30:01.966 [2024-04-26 21:32:49.686457] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1185fe0 is same with the state(5) to be set 00:30:01.966 [2024-04-26 21:32:49.686462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1185fe0 is same with the state(5) to be 
set 00:30:01.966 Initializing NVMe Controllers 00:30:01.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:01.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:01.966 Initialization complete. Launching workers. 00:30:01.966 ======================================================== 00:30:01.966 Latency(us) 00:30:01.966 Device Information : IOPS MiB/s Average min max 00:30:01.966 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1091.92 136.49 29321.21 8093.84 243363.83 00:30:01.966 ======================================================== 00:30:01.966 Total : 1091.92 136.49 29321.21 8093.84 243363.83 00:30:01.966 00:30:01.966 21:32:49 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:01.966 21:32:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:01.966 21:32:49 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:01.966 No valid NVMe controllers or AIO or URING devices found 00:30:01.966 Initializing NVMe Controllers 00:30:01.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:01.966 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:30:01.966 WARNING: Some requested NVMe devices were skipped 00:30:01.966 21:32:50 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:01.966 21:32:50 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:11.969 Initializing NVMe Controllers 00:30:11.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.969 Controller IO queue size 128, less than required. 00:30:11.969 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:11.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:11.969 Initialization complete. Launching workers. 
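As with the qd=1 pair, the 512-byte pass above is skipped because the namespace exported at this stage is the lvol bdev with a 4096-byte block size, so only the 131072-byte pass of each queue-depth pair yields a latency table. The ns size quoted in the warning is simply the 5104 MiB nested volume created earlier:

    echo $((5104 * 1024 * 1024))   # 5351931904 bytes, the ns size shown in the warning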
00:30:11.969 ======================================================== 00:30:11.969 Latency(us) 00:30:11.969 Device Information : IOPS MiB/s Average min max 00:30:11.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4311.40 538.92 29744.42 4676.80 168631.34 00:30:11.969 ======================================================== 00:30:11.969 Total : 4311.40 538.92 29744.42 4676.80 168631.34 00:30:11.969 00:30:11.969 21:33:00 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:11.969 21:33:00 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b99531ee-f4f0-4f84-9438-7847848fb6b0 00:30:11.969 21:33:00 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:11.969 21:33:01 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 17998a9e-d55a-4920-8120-55d4bcfe9d70 00:30:12.228 21:33:01 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:12.486 21:33:01 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:12.486 21:33:01 -- host/perf.sh@114 -- # nvmftestfini 00:30:12.486 21:33:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:12.486 21:33:01 -- nvmf/common.sh@117 -- # sync 00:30:12.486 21:33:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:12.486 21:33:01 -- nvmf/common.sh@120 -- # set +e 00:30:12.486 21:33:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:12.486 21:33:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:12.486 rmmod nvme_tcp 00:30:12.486 rmmod nvme_fabrics 00:30:12.486 rmmod nvme_keyring 00:30:12.486 21:33:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:12.486 21:33:01 -- nvmf/common.sh@124 -- # set -e 00:30:12.486 21:33:01 -- nvmf/common.sh@125 -- # return 0 00:30:12.486 21:33:01 -- nvmf/common.sh@478 -- # '[' -n 98976 ']' 00:30:12.486 21:33:01 -- nvmf/common.sh@479 -- # killprocess 98976 00:30:12.486 21:33:01 -- common/autotest_common.sh@936 -- # '[' -z 98976 ']' 00:30:12.486 21:33:01 -- common/autotest_common.sh@940 -- # kill -0 98976 00:30:12.486 21:33:01 -- common/autotest_common.sh@941 -- # uname 00:30:12.486 21:33:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:12.486 21:33:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98976 00:30:12.486 killing process with pid 98976 00:30:12.486 21:33:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:12.486 21:33:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:12.486 21:33:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98976' 00:30:12.486 21:33:01 -- common/autotest_common.sh@955 -- # kill 98976 00:30:12.486 21:33:01 -- common/autotest_common.sh@960 -- # wait 98976 00:30:12.744 21:33:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:12.744 21:33:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:12.744 21:33:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:12.744 21:33:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:12.744 21:33:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:12.744 21:33:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.744 21:33:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:12.744 21:33:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.744 21:33:01 -- nvmf/common.sh@279 -- # ip 
-4 addr flush nvmf_init_if 00:30:12.744 ************************************ 00:30:12.744 END TEST nvmf_perf 00:30:12.744 ************************************ 00:30:12.745 00:30:12.745 real 0m48.800s 00:30:12.745 user 3m4.463s 00:30:12.745 sys 0m9.841s 00:30:12.745 21:33:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:12.745 21:33:01 -- common/autotest_common.sh@10 -- # set +x 00:30:13.003 21:33:02 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:13.004 21:33:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:13.004 21:33:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:13.004 21:33:02 -- common/autotest_common.sh@10 -- # set +x 00:30:13.004 ************************************ 00:30:13.004 START TEST nvmf_fio_host 00:30:13.004 ************************************ 00:30:13.004 21:33:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:13.004 * Looking for test storage... 00:30:13.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:13.004 21:33:02 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:13.004 21:33:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.004 21:33:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.004 21:33:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.004 21:33:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.004 21:33:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.004 21:33:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.004 21:33:02 -- paths/export.sh@5 -- # export PATH 00:30:13.004 21:33:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.004 21:33:02 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:13.004 21:33:02 -- nvmf/common.sh@7 -- # uname -s 00:30:13.004 21:33:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.004 21:33:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.004 21:33:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.004 21:33:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.004 21:33:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.004 21:33:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.004 21:33:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.004 21:33:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.004 21:33:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.004 21:33:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.263 21:33:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:30:13.263 21:33:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:30:13.263 21:33:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.263 21:33:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.263 21:33:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:13.263 21:33:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.263 21:33:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:13.263 21:33:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.263 21:33:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.263 21:33:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.263 21:33:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.263 21:33:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.263 21:33:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.263 21:33:02 -- paths/export.sh@5 -- # export PATH 00:30:13.263 21:33:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.263 21:33:02 -- nvmf/common.sh@47 -- # : 0 00:30:13.263 21:33:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:13.263 21:33:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:13.263 21:33:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.263 21:33:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.263 21:33:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.263 21:33:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:13.263 21:33:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:13.263 21:33:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:13.263 21:33:02 -- host/fio.sh@12 -- # nvmftestinit 00:30:13.263 21:33:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:13.263 21:33:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.263 21:33:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:13.263 21:33:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:13.263 21:33:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:13.263 21:33:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.263 21:33:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:13.263 21:33:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.263 21:33:02 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:30:13.263 21:33:02 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:30:13.263 21:33:02 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:30:13.263 21:33:02 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:30:13.263 21:33:02 -- nvmf/common.sh@420 
-- # [[ tcp == tcp ]] 00:30:13.263 21:33:02 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:30:13.263 21:33:02 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.263 21:33:02 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.263 21:33:02 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:13.263 21:33:02 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:13.263 21:33:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:13.263 21:33:02 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:13.263 21:33:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:13.263 21:33:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.263 21:33:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:13.263 21:33:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:13.263 21:33:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:13.263 21:33:02 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:13.263 21:33:02 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:13.263 21:33:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:13.263 Cannot find device "nvmf_tgt_br" 00:30:13.263 21:33:02 -- nvmf/common.sh@155 -- # true 00:30:13.263 21:33:02 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:13.263 Cannot find device "nvmf_tgt_br2" 00:30:13.263 21:33:02 -- nvmf/common.sh@156 -- # true 00:30:13.263 21:33:02 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:13.263 21:33:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:13.263 Cannot find device "nvmf_tgt_br" 00:30:13.263 21:33:02 -- nvmf/common.sh@158 -- # true 00:30:13.263 21:33:02 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:13.263 Cannot find device "nvmf_tgt_br2" 00:30:13.263 21:33:02 -- nvmf/common.sh@159 -- # true 00:30:13.263 21:33:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:13.263 21:33:02 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:13.263 21:33:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:13.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:13.263 21:33:02 -- nvmf/common.sh@162 -- # true 00:30:13.263 21:33:02 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:13.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:13.263 21:33:02 -- nvmf/common.sh@163 -- # true 00:30:13.263 21:33:02 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:13.263 21:33:02 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:13.263 21:33:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:13.263 21:33:02 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:13.263 21:33:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:13.263 21:33:02 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:13.263 21:33:02 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:13.263 21:33:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:13.521 21:33:02 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:30:13.521 21:33:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:13.521 21:33:02 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:13.521 21:33:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:13.521 21:33:02 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:13.521 21:33:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:13.521 21:33:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:13.521 21:33:02 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:13.521 21:33:02 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:13.521 21:33:02 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:13.521 21:33:02 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:13.521 21:33:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:13.521 21:33:02 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:13.521 21:33:02 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:13.521 21:33:02 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:13.521 21:33:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:13.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:30:13.521 00:30:13.521 --- 10.0.0.2 ping statistics --- 00:30:13.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.521 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:30:13.521 21:33:02 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:13.521 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:13.521 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:30:13.521 00:30:13.521 --- 10.0.0.3 ping statistics --- 00:30:13.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.521 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:30:13.521 21:33:02 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:13.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:30:13.521 00:30:13.521 --- 10.0.0.1 ping statistics --- 00:30:13.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.521 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:30:13.521 21:33:02 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.521 21:33:02 -- nvmf/common.sh@422 -- # return 0 00:30:13.521 21:33:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:13.521 21:33:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.521 21:33:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:13.521 21:33:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:13.521 21:33:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.521 21:33:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:13.521 21:33:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:13.521 21:33:02 -- host/fio.sh@14 -- # [[ y != y ]] 00:30:13.521 21:33:02 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:30:13.521 21:33:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:13.521 21:33:02 -- common/autotest_common.sh@10 -- # set +x 00:30:13.521 21:33:02 -- host/fio.sh@22 -- # nvmfpid=99932 00:30:13.521 21:33:02 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:13.521 21:33:02 -- host/fio.sh@26 -- # waitforlisten 99932 00:30:13.521 21:33:02 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:13.521 21:33:02 -- common/autotest_common.sh@817 -- # '[' -z 99932 ']' 00:30:13.521 21:33:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.521 21:33:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:13.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.521 21:33:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.521 21:33:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:13.521 21:33:02 -- common/autotest_common.sh@10 -- # set +x 00:30:13.521 [2024-04-26 21:33:02.686565] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:30:13.521 [2024-04-26 21:33:02.686637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.780 [2024-04-26 21:33:02.831637] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:13.780 [2024-04-26 21:33:02.879081] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.780 [2024-04-26 21:33:02.879133] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.780 [2024-04-26 21:33:02.879140] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:13.780 [2024-04-26 21:33:02.879145] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:13.780 [2024-04-26 21:33:02.879149] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
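For reference, the nvmf_veth_init sequence traced above amounts to a two-namespace veth topology bridged on the host; a condensed sketch of the same steps (the second target interface on 10.0.0.3 omitted), using the interface and namespace names from this run:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                              # host initiator -> namespaced target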
00:30:13.780 [2024-04-26 21:33:02.879404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.781 [2024-04-26 21:33:02.879575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:13.781 [2024-04-26 21:33:02.879649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:13.781 [2024-04-26 21:33:02.879651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.346 21:33:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:14.346 21:33:03 -- common/autotest_common.sh@850 -- # return 0 00:30:14.346 21:33:03 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:14.346 21:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.346 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.346 [2024-04-26 21:33:03.551090] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.346 21:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.346 21:33:03 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:30:14.346 21:33:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:14.346 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.606 21:33:03 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:14.606 21:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.606 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.606 Malloc1 00:30:14.606 21:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.606 21:33:03 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:14.606 21:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.606 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.606 21:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.606 21:33:03 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:14.606 21:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.606 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.606 21:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.606 21:33:03 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:14.606 21:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.606 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.606 [2024-04-26 21:33:03.674322] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.606 21:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.606 21:33:03 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:14.606 21:33:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:14.606 21:33:03 -- common/autotest_common.sh@10 -- # set +x 00:30:14.606 21:33:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:14.606 21:33:03 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:30:14.606 21:33:03 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.606 21:33:03 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
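The target-side state exercised by the fio jobs in this test is built entirely over RPC in the lines above; condensed, it is equivalent to the following rpc.py calls (a sketch, assuming rpc_cmd forwards to scripts/rpc.py against the nvmf_tgt started inside the namespace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1                       # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420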
00:30:14.606 21:33:03 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:14.606 21:33:03 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:14.606 21:33:03 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:14.606 21:33:03 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:14.606 21:33:03 -- common/autotest_common.sh@1327 -- # shift 00:30:14.606 21:33:03 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:14.606 21:33:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.606 21:33:03 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:14.606 21:33:03 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:14.606 21:33:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:14.606 21:33:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:14.606 21:33:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:14.606 21:33:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.606 21:33:03 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:14.606 21:33:03 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:14.606 21:33:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:14.606 21:33:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:14.606 21:33:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:14.606 21:33:03 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:30:14.606 21:33:03 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.865 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:14.865 fio-3.35 00:30:14.865 Starting 1 thread 00:30:17.426 00:30:17.426 test: (groupid=0, jobs=1): err= 0: pid=100011: Fri Apr 26 21:33:06 2024 00:30:17.426 read: IOPS=9895, BW=38.7MiB/s (40.5MB/s)(77.6MiB/2007msec) 00:30:17.426 slat (nsec): min=1523, max=665020, avg=2092.59, stdev=5995.17 00:30:17.426 clat (usec): min=4387, max=17406, avg=6763.99, stdev=774.22 00:30:17.426 lat (usec): min=4389, max=17408, avg=6766.08, stdev=774.54 00:30:17.426 clat percentiles (usec): 00:30:17.426 | 1.00th=[ 5407], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6325], 00:30:17.426 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:30:17.426 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7373], 95.00th=[ 7570], 00:30:17.426 | 99.00th=[ 9241], 99.50th=[11076], 99.90th=[16909], 99.95th=[17171], 00:30:17.426 | 99.99th=[17433] 00:30:17.426 bw ( KiB/s): min=38648, max=40704, per=100.00%, avg=39606.00, stdev=911.54, samples=4 00:30:17.426 iops : min= 9662, max=10176, avg=9901.50, stdev=227.89, samples=4 00:30:17.426 write: IOPS=9916, BW=38.7MiB/s (40.6MB/s)(77.7MiB/2007msec); 0 zone resets 00:30:17.426 slat (nsec): min=1580, max=419007, avg=2140.48, stdev=3314.05 00:30:17.426 clat (usec): min=3581, max=12524, avg=6095.01, stdev=556.96 00:30:17.426 lat (usec): min=3583, max=12526, avg=6097.15, stdev=557.19 00:30:17.426 clat percentiles (usec): 00:30:17.426 | 1.00th=[ 4883], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5735], 00:30:17.426 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 
00:30:17.426 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:30:17.426 | 99.00th=[ 7308], 99.50th=[ 8979], 99.90th=[10552], 99.95th=[11207], 00:30:17.426 | 99.99th=[12518] 00:30:17.426 bw ( KiB/s): min=39104, max=40192, per=100.00%, avg=39668.00, stdev=444.82, samples=4 00:30:17.426 iops : min= 9776, max=10048, avg=9917.00, stdev=111.21, samples=4 00:30:17.426 lat (msec) : 4=0.05%, 10=99.45%, 20=0.50% 00:30:17.426 cpu : usr=73.63%, sys=19.64%, ctx=36, majf=0, minf=5 00:30:17.426 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:17.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:17.426 issued rwts: total=19861,19903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.426 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:17.426 00:30:17.426 Run status group 0 (all jobs): 00:30:17.426 READ: bw=38.7MiB/s (40.5MB/s), 38.7MiB/s-38.7MiB/s (40.5MB/s-40.5MB/s), io=77.6MiB (81.3MB), run=2007-2007msec 00:30:17.426 WRITE: bw=38.7MiB/s (40.6MB/s), 38.7MiB/s-38.7MiB/s (40.6MB/s-40.6MB/s), io=77.7MiB (81.5MB), run=2007-2007msec 00:30:17.426 21:33:06 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:17.426 21:33:06 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:17.426 21:33:06 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:17.426 21:33:06 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:17.426 21:33:06 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:17.426 21:33:06 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:17.426 21:33:06 -- common/autotest_common.sh@1327 -- # shift 00:30:17.426 21:33:06 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:17.426 21:33:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.426 21:33:06 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:17.426 21:33:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:17.427 21:33:06 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:17.427 21:33:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:17.427 21:33:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:17.427 21:33:06 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.427 21:33:06 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:17.427 21:33:06 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:17.427 21:33:06 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:17.427 21:33:06 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:17.427 21:33:06 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:17.427 21:33:06 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:30:17.427 21:33:06 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
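The figures in the first fio run above are internally consistent: at a 4-KiB block size the reported bandwidth is just IOPS times block size, e.g. for the read side:

    awk 'BEGIN { printf "%.1f MiB/s\n", 9895 * 4096 / 1048576 }'    # ~38.7 MiB/s, matching the reported read bandwidth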
00:30:17.427 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:17.427 fio-3.35 00:30:17.427 Starting 1 thread 00:30:19.960 00:30:19.961 test: (groupid=0, jobs=1): err= 0: pid=100058: Fri Apr 26 21:33:08 2024 00:30:19.961 read: IOPS=9256, BW=145MiB/s (152MB/s)(290MiB/2008msec) 00:30:19.961 slat (usec): min=2, max=110, avg= 3.29, stdev= 1.93 00:30:19.961 clat (usec): min=1937, max=18020, avg=8155.11, stdev=2026.42 00:30:19.961 lat (usec): min=1940, max=18038, avg=8158.40, stdev=2026.67 00:30:19.961 clat percentiles (usec): 00:30:19.961 | 1.00th=[ 4178], 5.00th=[ 5014], 10.00th=[ 5604], 20.00th=[ 6325], 00:30:19.961 | 30.00th=[ 6980], 40.00th=[ 7570], 50.00th=[ 8225], 60.00th=[ 8848], 00:30:19.961 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11207], 00:30:19.961 | 99.00th=[13960], 99.50th=[15926], 99.90th=[17433], 99.95th=[17695], 00:30:19.961 | 99.99th=[17957] 00:30:19.961 bw ( KiB/s): min=70016, max=82304, per=49.54%, avg=73376.00, stdev=5965.12, samples=4 00:30:19.961 iops : min= 4376, max= 5144, avg=4586.00, stdev=372.82, samples=4 00:30:19.961 write: IOPS=5436, BW=84.9MiB/s (89.1MB/s)(149MiB/1755msec); 0 zone resets 00:30:19.961 slat (usec): min=29, max=556, avg=36.22, stdev=11.26 00:30:19.961 clat (usec): min=2037, max=19335, avg=10079.56, stdev=1947.21 00:30:19.961 lat (usec): min=2071, max=19490, avg=10115.78, stdev=1950.11 00:30:19.961 clat percentiles (usec): 00:30:19.961 | 1.00th=[ 6456], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[ 8455], 00:30:19.961 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10290], 00:30:19.961 | 70.00th=[10814], 80.00th=[11600], 90.00th=[12780], 95.00th=[13698], 00:30:19.961 | 99.00th=[15401], 99.50th=[16319], 99.90th=[18744], 99.95th=[19006], 00:30:19.961 | 99.99th=[19268] 00:30:19.961 bw ( KiB/s): min=71904, max=86016, per=87.15%, avg=75808.00, stdev=6821.18, samples=4 00:30:19.961 iops : min= 4494, max= 5376, avg=4738.00, stdev=426.32, samples=4 00:30:19.961 lat (msec) : 2=0.01%, 4=0.45%, 10=74.65%, 20=24.89% 00:30:19.961 cpu : usr=76.58%, sys=15.55%, ctx=25, majf=0, minf=2 00:30:19.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:30:19.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:19.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:19.961 issued rwts: total=18587,9541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:19.961 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:19.961 00:30:19.961 Run status group 0 (all jobs): 00:30:19.961 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=290MiB (305MB), run=2008-2008msec 00:30:19.961 WRITE: bw=84.9MiB/s (89.1MB/s), 84.9MiB/s-84.9MiB/s (89.1MB/s-89.1MB/s), io=149MiB (156MB), run=1755-1755msec 00:30:19.961 21:33:08 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:19.961 21:33:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.961 21:33:08 -- common/autotest_common.sh@10 -- # set +x 00:30:19.961 21:33:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.961 21:33:08 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:30:19.961 21:33:08 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:30:19.961 21:33:08 -- host/fio.sh@49 -- # get_nvme_bdfs 00:30:19.961 21:33:08 -- common/autotest_common.sh@1499 -- # bdfs=() 00:30:19.961 21:33:08 -- common/autotest_common.sh@1499 -- # local bdfs 00:30:19.961 21:33:08 -- common/autotest_common.sh@1500 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:19.961 21:33:08 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:19.961 21:33:08 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:30:19.961 21:33:08 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:30:19.961 21:33:08 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:19.961 21:33:08 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:30:19.961 21:33:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.961 21:33:08 -- common/autotest_common.sh@10 -- # set +x 00:30:19.961 Nvme0n1 00:30:19.961 21:33:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.961 21:33:08 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:19.961 21:33:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.961 21:33:08 -- common/autotest_common.sh@10 -- # set +x 00:30:19.961 21:33:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.961 21:33:08 -- host/fio.sh@51 -- # ls_guid=01eac4a4-3cfd-4723-a9ee-d587c78da594 00:30:19.961 21:33:08 -- host/fio.sh@52 -- # get_lvs_free_mb 01eac4a4-3cfd-4723-a9ee-d587c78da594 00:30:19.961 21:33:08 -- common/autotest_common.sh@1350 -- # local lvs_uuid=01eac4a4-3cfd-4723-a9ee-d587c78da594 00:30:19.961 21:33:08 -- common/autotest_common.sh@1351 -- # local lvs_info 00:30:19.961 21:33:08 -- common/autotest_common.sh@1352 -- # local fc 00:30:19.961 21:33:08 -- common/autotest_common.sh@1353 -- # local cs 00:30:19.961 21:33:08 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:19.961 21:33:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.961 21:33:08 -- common/autotest_common.sh@10 -- # set +x 00:30:19.961 21:33:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.961 21:33:08 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:30:19.961 { 00:30:19.961 "base_bdev": "Nvme0n1", 00:30:19.961 "block_size": 4096, 00:30:19.961 "cluster_size": 1073741824, 00:30:19.961 "free_clusters": 4, 00:30:19.961 "name": "lvs_0", 00:30:19.961 "total_data_clusters": 4, 00:30:19.961 "uuid": "01eac4a4-3cfd-4723-a9ee-d587c78da594" 00:30:19.961 } 00:30:19.961 ]' 00:30:19.961 21:33:08 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="01eac4a4-3cfd-4723-a9ee-d587c78da594") .free_clusters' 00:30:19.961 21:33:08 -- common/autotest_common.sh@1355 -- # fc=4 00:30:19.961 21:33:08 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="01eac4a4-3cfd-4723-a9ee-d587c78da594") .cluster_size' 00:30:19.961 21:33:09 -- common/autotest_common.sh@1356 -- # cs=1073741824 00:30:19.961 21:33:09 -- common/autotest_common.sh@1359 -- # free_mb=4096 00:30:19.961 4096 00:30:19.961 21:33:09 -- common/autotest_common.sh@1360 -- # echo 4096 00:30:19.961 21:33:09 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:30:19.961 21:33:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.961 21:33:09 -- common/autotest_common.sh@10 -- # set +x 00:30:19.961 51d08e4e-bef6-4ea4-85cf-f10809d69db4 00:30:19.961 21:33:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.961 21:33:09 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:19.961 21:33:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.961 21:33:09 -- common/autotest_common.sh@10 -- # 
set +x 00:30:19.961 21:33:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.961 21:33:09 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:19.961 21:33:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.961 21:33:09 -- common/autotest_common.sh@10 -- # set +x 00:30:19.961 21:33:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.961 21:33:09 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:19.961 21:33:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:19.961 21:33:09 -- common/autotest_common.sh@10 -- # set +x 00:30:19.961 21:33:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:19.961 21:33:09 -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:19.961 21:33:09 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:19.961 21:33:09 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:19.961 21:33:09 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:19.961 21:33:09 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:19.961 21:33:09 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:19.961 21:33:09 -- common/autotest_common.sh@1327 -- # shift 00:30:19.961 21:33:09 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:19.961 21:33:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.961 21:33:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:19.961 21:33:09 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:19.961 21:33:09 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:19.961 21:33:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:19.961 21:33:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:19.961 21:33:09 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.961 21:33:09 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:19.961 21:33:09 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:19.961 21:33:09 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:19.961 21:33:09 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:19.961 21:33:09 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:19.961 21:33:09 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:30:19.961 21:33:09 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:20.222 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:20.222 fio-3.35 00:30:20.222 Starting 1 thread 00:30:22.760 00:30:22.760 test: (groupid=0, jobs=1): err= 0: pid=100138: Fri Apr 26 21:33:11 2024 00:30:22.760 read: IOPS=6812, BW=26.6MiB/s (27.9MB/s)(53.4MiB/2008msec) 00:30:22.760 slat (nsec): min=1624, max=478246, avg=2210.88, stdev=5097.60 00:30:22.760 clat 
(usec): min=5467, max=17279, avg=9848.39, stdev=787.20 00:30:22.760 lat (usec): min=5481, max=17281, avg=9850.60, stdev=786.86 00:30:22.760 clat percentiles (usec): 00:30:22.760 | 1.00th=[ 8094], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9241], 00:30:22.760 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:30:22.760 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:30:22.760 | 99.00th=[11731], 99.50th=[12125], 99.90th=[15139], 99.95th=[16450], 00:30:22.760 | 99.99th=[17171] 00:30:22.760 bw ( KiB/s): min=26403, max=27736, per=99.94%, avg=27234.75, stdev=613.39, samples=4 00:30:22.760 iops : min= 6600, max= 6934, avg=6808.50, stdev=153.69, samples=4 00:30:22.760 write: IOPS=6818, BW=26.6MiB/s (27.9MB/s)(53.5MiB/2008msec); 0 zone resets 00:30:22.760 slat (nsec): min=1684, max=509511, avg=2284.39, stdev=4552.40 00:30:22.760 clat (usec): min=3629, max=17383, avg=8871.46, stdev=748.91 00:30:22.760 lat (usec): min=3647, max=17385, avg=8873.74, stdev=748.72 00:30:22.760 clat percentiles (usec): 00:30:22.760 | 1.00th=[ 7308], 5.00th=[ 7767], 10.00th=[ 8029], 20.00th=[ 8291], 00:30:22.760 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:30:22.760 | 70.00th=[ 9241], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10028], 00:30:22.760 | 99.00th=[10552], 99.50th=[10945], 99.90th=[13829], 99.95th=[16319], 00:30:22.760 | 99.99th=[17433] 00:30:22.760 bw ( KiB/s): min=27008, max=27473, per=99.82%, avg=27226.25, stdev=222.20, samples=4 00:30:22.760 iops : min= 6752, max= 6868, avg=6806.50, stdev=55.46, samples=4 00:30:22.760 lat (msec) : 4=0.01%, 10=77.47%, 20=22.52% 00:30:22.760 cpu : usr=74.94%, sys=20.18%, ctx=5, majf=0, minf=5 00:30:22.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:22.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:22.760 issued rwts: total=13679,13692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:22.760 00:30:22.760 Run status group 0 (all jobs): 00:30:22.760 READ: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=53.4MiB (56.0MB), run=2008-2008msec 00:30:22.760 WRITE: bw=26.6MiB/s (27.9MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=53.5MiB (56.1MB), run=2008-2008msec 00:30:22.760 21:33:11 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:22.760 21:33:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.760 21:33:11 -- common/autotest_common.sh@10 -- # set +x 00:30:22.760 21:33:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.760 21:33:11 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:22.760 21:33:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.760 21:33:11 -- common/autotest_common.sh@10 -- # set +x 00:30:22.760 21:33:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.760 21:33:11 -- host/fio.sh@62 -- # ls_nested_guid=0cb4c742-ae27-4930-8af1-52b4a480f683 00:30:22.760 21:33:11 -- host/fio.sh@63 -- # get_lvs_free_mb 0cb4c742-ae27-4930-8af1-52b4a480f683 00:30:22.760 21:33:11 -- common/autotest_common.sh@1350 -- # local lvs_uuid=0cb4c742-ae27-4930-8af1-52b4a480f683 00:30:22.760 21:33:11 -- common/autotest_common.sh@1351 -- # local lvs_info 00:30:22.760 21:33:11 -- common/autotest_common.sh@1352 -- # local fc 00:30:22.760 21:33:11 -- 
common/autotest_common.sh@1353 -- # local cs 00:30:22.760 21:33:11 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:22.760 21:33:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.760 21:33:11 -- common/autotest_common.sh@10 -- # set +x 00:30:22.760 21:33:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.760 21:33:11 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:30:22.760 { 00:30:22.760 "base_bdev": "Nvme0n1", 00:30:22.760 "block_size": 4096, 00:30:22.760 "cluster_size": 1073741824, 00:30:22.760 "free_clusters": 0, 00:30:22.760 "name": "lvs_0", 00:30:22.760 "total_data_clusters": 4, 00:30:22.760 "uuid": "01eac4a4-3cfd-4723-a9ee-d587c78da594" 00:30:22.760 }, 00:30:22.760 { 00:30:22.760 "base_bdev": "51d08e4e-bef6-4ea4-85cf-f10809d69db4", 00:30:22.760 "block_size": 4096, 00:30:22.761 "cluster_size": 4194304, 00:30:22.761 "free_clusters": 1022, 00:30:22.761 "name": "lvs_n_0", 00:30:22.761 "total_data_clusters": 1022, 00:30:22.761 "uuid": "0cb4c742-ae27-4930-8af1-52b4a480f683" 00:30:22.761 } 00:30:22.761 ]' 00:30:22.761 21:33:11 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="0cb4c742-ae27-4930-8af1-52b4a480f683") .free_clusters' 00:30:22.761 21:33:11 -- common/autotest_common.sh@1355 -- # fc=1022 00:30:22.761 21:33:11 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="0cb4c742-ae27-4930-8af1-52b4a480f683") .cluster_size' 00:30:22.761 21:33:11 -- common/autotest_common.sh@1356 -- # cs=4194304 00:30:22.761 21:33:11 -- common/autotest_common.sh@1359 -- # free_mb=4088 00:30:22.761 21:33:11 -- common/autotest_common.sh@1360 -- # echo 4088 00:30:22.761 4088 00:30:22.761 21:33:11 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:30:22.761 21:33:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.761 21:33:11 -- common/autotest_common.sh@10 -- # set +x 00:30:22.761 09ead27a-d850-4aa3-891e-2b92cf291bb3 00:30:22.761 21:33:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.761 21:33:11 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:22.761 21:33:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.761 21:33:11 -- common/autotest_common.sh@10 -- # set +x 00:30:22.761 21:33:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.761 21:33:11 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:22.761 21:33:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.761 21:33:11 -- common/autotest_common.sh@10 -- # set +x 00:30:22.761 21:33:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.761 21:33:11 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:22.761 21:33:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:22.761 21:33:11 -- common/autotest_common.sh@10 -- # set +x 00:30:22.761 21:33:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:22.761 21:33:11 -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:22.761 21:33:11 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:22.761 21:33:11 -- 
common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:22.761 21:33:11 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:22.761 21:33:11 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:22.761 21:33:11 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:22.761 21:33:11 -- common/autotest_common.sh@1327 -- # shift 00:30:22.761 21:33:11 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:22.761 21:33:11 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.761 21:33:11 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:22.761 21:33:11 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:22.761 21:33:11 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:22.761 21:33:11 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:22.761 21:33:11 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:22.761 21:33:11 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.761 21:33:11 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:22.761 21:33:11 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:22.761 21:33:11 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:22.761 21:33:11 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:22.761 21:33:11 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:22.761 21:33:11 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:30:22.761 21:33:11 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:22.761 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:22.761 fio-3.35 00:30:22.761 Starting 1 thread 00:30:25.296 00:30:25.296 test: (groupid=0, jobs=1): err= 0: pid=100193: Fri Apr 26 21:33:14 2024 00:30:25.296 read: IOPS=6136, BW=24.0MiB/s (25.1MB/s)(48.2MiB/2009msec) 00:30:25.296 slat (nsec): min=1826, max=442524, avg=2289.83, stdev=5002.75 00:30:25.296 clat (usec): min=4415, max=17920, avg=10976.57, stdev=915.03 00:30:25.296 lat (usec): min=4429, max=17923, avg=10978.86, stdev=914.61 00:30:25.296 clat percentiles (usec): 00:30:25.296 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:30:25.296 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:30:25.296 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12125], 95.00th=[12387], 00:30:25.296 | 99.00th=[13173], 99.50th=[13566], 99.90th=[16712], 99.95th=[17433], 00:30:25.296 | 99.99th=[17957] 00:30:25.296 bw ( KiB/s): min=23552, max=24992, per=99.92%, avg=24528.00, stdev=670.86, samples=4 00:30:25.296 iops : min= 5888, max= 6248, avg=6132.00, stdev=167.71, samples=4 00:30:25.296 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(48.0MiB/2009msec); 0 zone resets 00:30:25.296 slat (nsec): min=1892, max=351754, avg=2427.02, stdev=3499.28 00:30:25.296 clat (usec): min=3395, max=17341, avg=9809.10, stdev=848.39 00:30:25.297 lat (usec): min=3414, max=17343, avg=9811.53, stdev=848.09 00:30:25.297 clat percentiles (usec): 00:30:25.297 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:30:25.297 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:30:25.297 | 
70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:30:25.297 | 99.00th=[11600], 99.50th=[11863], 99.90th=[16319], 99.95th=[16581], 00:30:25.297 | 99.99th=[16909] 00:30:25.297 bw ( KiB/s): min=24384, max=24584, per=99.94%, avg=24466.00, stdev=99.14, samples=4 00:30:25.297 iops : min= 6096, max= 6146, avg=6116.50, stdev=24.79, samples=4 00:30:25.297 lat (msec) : 4=0.02%, 10=35.87%, 20=64.11% 00:30:25.297 cpu : usr=76.59%, sys=18.53%, ctx=7, majf=0, minf=5 00:30:25.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:25.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:25.297 issued rwts: total=12329,12295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.297 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:25.297 00:30:25.297 Run status group 0 (all jobs): 00:30:25.297 READ: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=48.2MiB (50.5MB), run=2009-2009msec 00:30:25.297 WRITE: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=48.0MiB (50.4MB), run=2009-2009msec 00:30:25.297 21:33:14 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:25.297 21:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.297 21:33:14 -- common/autotest_common.sh@10 -- # set +x 00:30:25.297 21:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.297 21:33:14 -- host/fio.sh@72 -- # sync 00:30:25.297 21:33:14 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:25.297 21:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.297 21:33:14 -- common/autotest_common.sh@10 -- # set +x 00:30:25.297 21:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.297 21:33:14 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:30:25.297 21:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.297 21:33:14 -- common/autotest_common.sh@10 -- # set +x 00:30:25.297 21:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.297 21:33:14 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:30:25.297 21:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.297 21:33:14 -- common/autotest_common.sh@10 -- # set +x 00:30:25.297 21:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.297 21:33:14 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:30:25.297 21:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.297 21:33:14 -- common/autotest_common.sh@10 -- # set +x 00:30:25.297 21:33:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:25.297 21:33:14 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:30:25.297 21:33:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:25.297 21:33:14 -- common/autotest_common.sh@10 -- # set +x 00:30:27.202 21:33:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:27.202 21:33:16 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:30:27.202 21:33:16 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:30:27.202 21:33:16 -- host/fio.sh@84 -- # nvmftestfini 00:30:27.202 21:33:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:27.202 21:33:16 -- nvmf/common.sh@117 -- # sync 00:30:27.202 21:33:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:27.202 21:33:16 -- nvmf/common.sh@120 -- # set +e 00:30:27.202 21:33:16 -- nvmf/common.sh@121 
-- # for i in {1..20} 00:30:27.202 21:33:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:27.202 rmmod nvme_tcp 00:30:27.202 rmmod nvme_fabrics 00:30:27.202 rmmod nvme_keyring 00:30:27.202 21:33:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:27.202 21:33:16 -- nvmf/common.sh@124 -- # set -e 00:30:27.202 21:33:16 -- nvmf/common.sh@125 -- # return 0 00:30:27.202 21:33:16 -- nvmf/common.sh@478 -- # '[' -n 99932 ']' 00:30:27.202 21:33:16 -- nvmf/common.sh@479 -- # killprocess 99932 00:30:27.202 21:33:16 -- common/autotest_common.sh@936 -- # '[' -z 99932 ']' 00:30:27.202 21:33:16 -- common/autotest_common.sh@940 -- # kill -0 99932 00:30:27.202 21:33:16 -- common/autotest_common.sh@941 -- # uname 00:30:27.202 21:33:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:27.202 21:33:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99932 00:30:27.202 killing process with pid 99932 00:30:27.202 21:33:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:27.202 21:33:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:27.202 21:33:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99932' 00:30:27.202 21:33:16 -- common/autotest_common.sh@955 -- # kill 99932 00:30:27.202 21:33:16 -- common/autotest_common.sh@960 -- # wait 99932 00:30:27.461 21:33:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:27.461 21:33:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:27.461 21:33:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:27.461 21:33:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:27.461 21:33:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:27.461 21:33:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.461 21:33:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:27.461 21:33:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.461 21:33:16 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:27.461 00:30:27.461 real 0m14.591s 00:30:27.461 user 1m1.273s 00:30:27.461 sys 0m3.170s 00:30:27.461 21:33:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:27.461 21:33:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.461 ************************************ 00:30:27.461 END TEST nvmf_fio_host 00:30:27.461 ************************************ 00:30:27.721 21:33:16 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:27.721 21:33:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:27.721 21:33:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:27.721 21:33:16 -- common/autotest_common.sh@10 -- # set +x 00:30:27.721 ************************************ 00:30:27.721 START TEST nvmf_failover 00:30:27.721 ************************************ 00:30:27.721 21:33:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:27.721 * Looking for test storage... 
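The teardown traced above follows one recurring pattern: nvmftestfini unloads the nvme-tcp kernel modules, while killprocess first checks that the target PID is still alive (kill -0), confirms the command name with ps before signalling, and finally reaps the process with wait. A minimal sketch of that pattern, for illustration only — the function name below is hypothetical and the real helper in autotest_common.sh carries extra handling (for example for sudo-wrapped processes) that is omitted here:

    kill_test_app() {                                # illustrative sketch, not the autotest helper itself
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0       # nothing to do if the PID is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")      # guard against PID reuse by an unrelated process
        if [ "$name" = "sudo" ]; then
            return 1                                 # a sudo wrapper would need different treatment
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true              # reap the child so no zombie is left behind
    }

In the trace above the same steps appear inline for pid 99932: kill -0 99932, ps --no-headers -o comm= 99932, kill 99932, then wait 99932.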
00:30:27.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:27.721 21:33:16 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:27.981 21:33:16 -- nvmf/common.sh@7 -- # uname -s 00:30:27.981 21:33:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.981 21:33:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.981 21:33:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.981 21:33:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.981 21:33:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.981 21:33:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.981 21:33:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.981 21:33:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.981 21:33:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.981 21:33:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.981 21:33:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:30:27.981 21:33:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:30:27.981 21:33:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.981 21:33:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.981 21:33:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:27.981 21:33:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.981 21:33:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:27.981 21:33:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.981 21:33:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.981 21:33:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.981 21:33:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.982 21:33:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.982 21:33:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.982 21:33:16 -- paths/export.sh@5 -- # export PATH 00:30:27.982 21:33:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.982 21:33:16 -- nvmf/common.sh@47 -- # : 0 00:30:27.982 21:33:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:27.982 21:33:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:27.982 21:33:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.982 21:33:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.982 21:33:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.982 21:33:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:27.982 21:33:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:27.982 21:33:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:27.982 21:33:17 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:27.982 21:33:17 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:27.982 21:33:17 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:27.982 21:33:17 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:27.982 21:33:17 -- host/failover.sh@18 -- # nvmftestinit 00:30:27.982 21:33:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:27.982 21:33:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.982 21:33:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:27.982 21:33:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:27.982 21:33:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:27.982 21:33:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.982 21:33:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:27.982 21:33:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.982 21:33:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:30:27.982 21:33:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:30:27.982 21:33:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:30:27.982 21:33:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:30:27.982 21:33:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:30:27.982 21:33:17 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:30:27.982 21:33:17 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.982 21:33:17 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.982 21:33:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:27.982 21:33:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:27.982 21:33:17 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:27.982 21:33:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:27.982 21:33:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:27.982 21:33:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.982 21:33:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:27.982 21:33:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:27.982 21:33:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:27.982 21:33:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:27.982 21:33:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:27.982 21:33:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:27.982 Cannot find device "nvmf_tgt_br" 00:30:27.982 21:33:17 -- nvmf/common.sh@155 -- # true 00:30:27.982 21:33:17 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:27.982 Cannot find device "nvmf_tgt_br2" 00:30:27.982 21:33:17 -- nvmf/common.sh@156 -- # true 00:30:27.982 21:33:17 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:27.982 21:33:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:27.982 Cannot find device "nvmf_tgt_br" 00:30:27.982 21:33:17 -- nvmf/common.sh@158 -- # true 00:30:27.982 21:33:17 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:27.982 Cannot find device "nvmf_tgt_br2" 00:30:27.982 21:33:17 -- nvmf/common.sh@159 -- # true 00:30:27.982 21:33:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:27.982 21:33:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:27.982 21:33:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:27.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:27.982 21:33:17 -- nvmf/common.sh@162 -- # true 00:30:27.982 21:33:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:27.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:27.982 21:33:17 -- nvmf/common.sh@163 -- # true 00:30:27.982 21:33:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:27.982 21:33:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:27.982 21:33:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:27.982 21:33:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:27.982 21:33:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:27.982 21:33:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:28.242 21:33:17 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:28.242 21:33:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:28.242 21:33:17 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:28.242 21:33:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:28.242 21:33:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:28.242 21:33:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:28.242 21:33:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:28.242 21:33:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:30:28.242 21:33:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:28.242 21:33:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:28.242 21:33:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:28.242 21:33:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:28.242 21:33:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:28.242 21:33:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:28.242 21:33:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:28.242 21:33:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:28.242 21:33:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:28.242 21:33:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:28.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:30:28.242 00:30:28.242 --- 10.0.0.2 ping statistics --- 00:30:28.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.243 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:30:28.243 21:33:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:28.243 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:28.243 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:30:28.243 00:30:28.243 --- 10.0.0.3 ping statistics --- 00:30:28.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.243 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:30:28.243 21:33:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:28.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:28.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:30:28.243 00:30:28.243 --- 10.0.0.1 ping statistics --- 00:30:28.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.243 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:30:28.243 21:33:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.243 21:33:17 -- nvmf/common.sh@422 -- # return 0 00:30:28.243 21:33:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:28.243 21:33:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.243 21:33:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:28.243 21:33:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:28.243 21:33:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.243 21:33:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:28.243 21:33:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:28.243 21:33:17 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:28.243 21:33:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:28.243 21:33:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:28.243 21:33:17 -- common/autotest_common.sh@10 -- # set +x 00:30:28.243 21:33:17 -- nvmf/common.sh@470 -- # nvmfpid=100435 00:30:28.243 21:33:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:28.243 21:33:17 -- nvmf/common.sh@471 -- # waitforlisten 100435 00:30:28.243 21:33:17 -- common/autotest_common.sh@817 -- # '[' -z 100435 ']' 00:30:28.243 21:33:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.243 21:33:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:28.243 21:33:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.243 21:33:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:28.243 21:33:17 -- common/autotest_common.sh@10 -- # set +x 00:30:28.243 [2024-04-26 21:33:17.444806] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:30:28.243 [2024-04-26 21:33:17.444894] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.503 [2024-04-26 21:33:17.584857] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:28.503 [2024-04-26 21:33:17.637283] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.503 [2024-04-26 21:33:17.637346] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.503 [2024-04-26 21:33:17.637353] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.503 [2024-04-26 21:33:17.637358] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.503 [2024-04-26 21:33:17.637362] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
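The nvmf_veth_init sequence traced above wires the initiator and the target into a dedicated network namespace before any NVMe/TCP traffic flows. A condensed recap of the topology it builds, with the commands taken directly from the trace (link-up steps, cleanup and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move the target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                                 # bridge ties the host-side ends together
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to port 4420
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                              # reachability checks seen in the trace
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The three ping probes above are what produce the 0%-loss statistics blocks in the log.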
00:30:28.503 [2024-04-26 21:33:17.637574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.503 [2024-04-26 21:33:17.637607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.503 [2024-04-26 21:33:17.637608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.442 21:33:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:29.442 21:33:18 -- common/autotest_common.sh@850 -- # return 0 00:30:29.442 21:33:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:29.442 21:33:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:29.442 21:33:18 -- common/autotest_common.sh@10 -- # set +x 00:30:29.442 21:33:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:29.442 21:33:18 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:29.442 [2024-04-26 21:33:18.589935] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.442 21:33:18 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:29.703 Malloc0 00:30:29.703 21:33:18 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:29.962 21:33:19 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:30.221 21:33:19 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:30.480 [2024-04-26 21:33:19.511853] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.480 21:33:19 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:30.740 [2024-04-26 21:33:19.735606] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:30.741 21:33:19 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:30.741 [2024-04-26 21:33:19.983411] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:31.002 21:33:20 -- host/failover.sh@31 -- # bdevperf_pid=100541 00:30:31.002 21:33:20 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:31.002 21:33:20 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:31.002 21:33:20 -- host/failover.sh@34 -- # waitforlisten 100541 /var/tmp/bdevperf.sock 00:30:31.002 21:33:20 -- common/autotest_common.sh@817 -- # '[' -z 100541 ']' 00:30:31.002 21:33:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:31.002 21:33:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:31.002 21:33:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:31.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
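Once the target application is listening on /var/tmp/spdk.sock, the failover test provisions everything over JSON-RPC and then launches bdevperf in wait-for-RPC mode (-z) on its own socket. A condensed recap of the calls seen in the trace, with the long repository paths abbreviated to rpc.py and bdevperf:

    rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options as traced
    rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MB malloc bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # bdevperf starts idle (-z) on a second RPC socket: queue depth 128, 4 KiB I/O,
    # verify workload, 15 second run; the NVMe paths are attached to it afterwards
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

With three listeners on the same subsystem, the test can later remove and re-add individual ports while the host keeps I/O running, which is the whole point of the failover scenario.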
00:30:31.002 21:33:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:31.002 21:33:20 -- common/autotest_common.sh@10 -- # set +x 00:30:31.934 21:33:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:31.934 21:33:20 -- common/autotest_common.sh@850 -- # return 0 00:30:31.934 21:33:20 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:32.191 NVMe0n1 00:30:32.191 21:33:21 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:32.448 00:30:32.448 21:33:21 -- host/failover.sh@39 -- # run_test_pid=100589 00:30:32.448 21:33:21 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:32.448 21:33:21 -- host/failover.sh@41 -- # sleep 1 00:30:33.382 21:33:22 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:33.642 [2024-04-26 21:33:22.778166] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778225] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778234] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778252] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778259] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778265] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778277] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778282] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778288] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778293] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 
21:33:22.778304] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778310] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778316] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778342] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778347] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778353] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778360] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778366] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778377] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778384] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778402] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778408] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778413] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778419] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778425] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778430] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778436] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same 
with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778441] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778447] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778453] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778459] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.642 [2024-04-26 21:33:22.778468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 [2024-04-26 21:33:22.778474] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 [2024-04-26 21:33:22.778480] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 [2024-04-26 21:33:22.778485] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 [2024-04-26 21:33:22.778491] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 [2024-04-26 21:33:22.778497] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 [2024-04-26 21:33:22.778502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 [2024-04-26 21:33:22.778507] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 [2024-04-26 21:33:22.778513] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 [2024-04-26 21:33:22.778518] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 [2024-04-26 21:33:22.778524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 [2024-04-26 21:33:22.778530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 [2024-04-26 21:33:22.778535] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 [2024-04-26 21:33:22.778541] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28940 is same with the state(5) to be set 00:30:33.643 21:33:22 -- host/failover.sh@45 -- # sleep 3 00:30:36.957 21:33:25 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:36.957 00:30:36.957 21:33:26 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:37.217 [2024-04-26 21:33:26.331206] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.331345] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.331387] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.331422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.331494] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.331541] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.331583] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.331636] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.331692] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.331765] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.331803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.331860] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.331909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.331964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.332033] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.332087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.332144] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.332189] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.332255] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.332314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.332383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.332437] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 [2024-04-26 21:33:26.332492] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b294e0 is same with the state(5) to be set 00:30:37.217 21:33:26 -- host/failover.sh@50 -- # sleep 3 00:30:40.510 21:33:29 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.510 [2024-04-26 21:33:29.584521] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.510 21:33:29 -- host/failover.sh@55 -- # sleep 1 00:30:41.446 21:33:30 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:41.706 [2024-04-26 21:33:30.816652] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.706 [2024-04-26 21:33:30.816792] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.706 [2024-04-26 21:33:30.816830] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816859] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816889] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816908] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816937] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816943] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816955] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816968] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816973] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816978] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816983] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816994] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.816999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 
21:33:30.817004] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817015] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817020] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817026] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817036] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817041] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817046] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817051] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817056] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817066] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817072] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817077] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817083] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817090] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817101] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817106] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817111] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817116] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same 
with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817121] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817127] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817132] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817137] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817153] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817158] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817163] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817168] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817173] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817178] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817183] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817188] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817203] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817208] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817214] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817219] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817225] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817230] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 [2024-04-26 21:33:30.817235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19805d0 is same with the state(5) to be set 00:30:41.707 21:33:30 -- host/failover.sh@59 -- # wait 100589 00:30:48.288 0 00:30:48.288 21:33:36 -- host/failover.sh@61 -- # killprocess 100541 00:30:48.288 21:33:36 -- common/autotest_common.sh@936 -- # '[' -z 100541 ']' 00:30:48.288 21:33:36 -- common/autotest_common.sh@940 -- # kill -0 100541 00:30:48.288 21:33:36 -- common/autotest_common.sh@941 -- # uname 00:30:48.288 21:33:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:48.288 21:33:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100541 00:30:48.288 killing process with pid 100541 00:30:48.288 21:33:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:48.288 21:33:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:48.288 21:33:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100541' 00:30:48.288 21:33:36 -- common/autotest_common.sh@955 -- # kill 100541 00:30:48.288 21:33:36 -- common/autotest_common.sh@960 -- # wait 100541 00:30:48.288 21:33:36 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:30:48.288 [2024-04-26 21:33:20.058140] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:30:48.288 [2024-04-26 21:33:20.058241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100541 ] 00:30:48.288 [2024-04-26 21:33:20.198140] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.288 [2024-04-26 21:33:20.248380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.288 Running I/O for 15 seconds... 
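The repeated "is same with the state(5) to be set" messages and the ABORTED - SQ DELETION completions that follow are the expected side effect of pulling the active listener out from under a connected host: each nvmf_subsystem_remove_listener call tears down the TCP qpairs on that port, and bdev_nvme is expected to retry the in-flight I/O on another attached path. The choreography, condensed from the trace above (paths abbreviated, the subsystem NQN shortened to $NQN, and the perform_tests PID held in $run_test_pid as in the script):

    NQN=nqn.2016-06.io.spdk:cnode1
    # two paths are registered with bdevperf before any I/O starts
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &                    # 15 s of queued verify I/O
    sleep 1
    rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420    # drop the primary port mid-I/O
    sleep 3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
    rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421    # drop the second port as well
    sleep 3
    rpc.py nvmf_subsystem_add_listener    $NQN -t tcp -a 10.0.0.2 -s 4420    # bring the original port back
    sleep 1
    rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422    # final switch back to 4420
    wait "$run_test_pid"                                                     # perform_tests must finish cleanly

The "0" printed after the wait in the log is the exit status of that perform_tests run, i.e. the verify workload survived every path switch.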
00:30:48.288 [2024-04-26 21:33:22.778752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.778797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.778819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.778830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.778843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.778853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.778865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.778875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.778887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.778897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.778909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.778919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.778931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.778942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.778954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.778964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.778976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.778986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.778998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779019] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:89088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779270] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.288 [2024-04-26 21:33:22.779509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:109 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.288 [2024-04-26 21:33:22.779519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:89256 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:48.289 [2024-04-26 21:33:22.779960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.779981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.779993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.780003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.780024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.780045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.289 [2024-04-26 21:33:22.780068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.289 [2024-04-26 21:33:22.780090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.289 [2024-04-26 21:33:22.780111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.289 [2024-04-26 21:33:22.780132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.289 [2024-04-26 21:33:22.780153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.289 [2024-04-26 21:33:22.780175] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.289 [2024-04-26 21:33:22.780201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.289 [2024-04-26 21:33:22.780222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.289 [2024-04-26 21:33:22.780244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.780266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.780288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.780314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.780344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.780366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.289 [2024-04-26 21:33:22.780387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.289 [2024-04-26 21:33:22.780399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780409] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:89552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:89560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.290 [2024-04-26 21:33:22.780860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.780990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.780999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.781014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.781024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.781040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.781050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.781062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:89664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.781072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.781083] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.781093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.781104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.781114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.781125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.781135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.781147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.781156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.781168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:89704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.781178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.781190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:89712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.781199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.781211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.781221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.781232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:89728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.781242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.781253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.290 [2024-04-26 21:33:22.781263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.290 [2024-04-26 21:33:22.781274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:89744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:89760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781556] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:89872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:22.781680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x855690 is same with the state(5) to be set 00:30:48.291 [2024-04-26 21:33:22.781704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.291 [2024-04-26 21:33:22.781711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.291 [2024-04-26 21:33:22.781719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89880 len:8 PRP1 0x0 PRP2 0x0 00:30:48.291 [2024-04-26 21:33:22.781730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781787] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x855690 was disconnected and freed. reset controller. 
00:30:48.291 [2024-04-26 21:33:22.781828] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:48.291 [2024-04-26 21:33:22.781882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.291 [2024-04-26 21:33:22.781895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.291 [2024-04-26 21:33:22.781916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.291 [2024-04-26 21:33:22.781937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.291 [2024-04-26 21:33:22.781958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:22.781967] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.291 [2024-04-26 21:33:22.782001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x837510 (9): Bad file descriptor 00:30:48.291 [2024-04-26 21:33:22.785439] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.291 [2024-04-26 21:33:22.817222] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:48.291 [2024-04-26 21:33:26.332622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.291 [2024-04-26 21:33:26.332668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:26.332690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.291 [2024-04-26 21:33:26.332726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:26.332739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.291 [2024-04-26 21:33:26.332749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:26.332761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:26.332771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:26.332783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:26.332793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:26.332805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:26.332815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:26.332827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:26.332836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:26.332848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:26.332858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:26.332870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:26.332880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:26.332892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:26.332901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 
21:33:26.332913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:26.332923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:26.332935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:26.332945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:26.332956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:26.332966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:26.332978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:26.332988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.291 [2024-04-26 21:33:26.333005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.291 [2024-04-26 21:33:26.333015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333368] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 
nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.292 [2024-04-26 21:33:26.333636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.292 [2024-04-26 21:33:26.333658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.292 [2024-04-26 21:33:26.333681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.292 [2024-04-26 21:33:26.333693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.292 [2024-04-26 21:33:26.333703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.333715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.333726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.333738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.333759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.333772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.333782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.333795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.333806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.333818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.333829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.333841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117872 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.333851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.333863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.333873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.333890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.333901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.333912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.333922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.333934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.333944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.333956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.333965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.333977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.333987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.333999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 
[2024-04-26 21:33:26.334073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:118000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:118016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:118024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:118040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:118056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:118064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:118072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:118080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:118088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:118096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:118104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:118112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:118136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.293 [2024-04-26 21:33:26.334618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:118144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.293 [2024-04-26 21:33:26.334627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.294 [2024-04-26 21:33:26.334647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:118160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.294 [2024-04-26 21:33:26.334667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:118168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.294 [2024-04-26 21:33:26.334689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.294 [2024-04-26 21:33:26.334709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:118184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.294 [2024-04-26 21:33:26.334734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:118192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.294 [2024-04-26 21:33:26.334754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:118200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.294 [2024-04-26 21:33:26.334774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:118208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.294 [2024-04-26 21:33:26.334795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:118216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.294 [2024-04-26 21:33:26.334815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.294 [2024-04-26 21:33:26.334835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:118232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.294 [2024-04-26 21:33:26.334855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.334875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.334896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.334916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:117552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.334938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.334958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:48.294 [2024-04-26 21:33:26.334969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.334978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.334993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:117608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:117624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 
21:33:26.335178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:117664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:117712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.294 [2024-04-26 21:33:26.335458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.294 [2024-04-26 21:33:26.335467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:26.335478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:26.335487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:26.335499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:26.335508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:26.335519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8439b0 is same with the state(5) to be set 00:30:48.295 [2024-04-26 21:33:26.335614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.295 [2024-04-26 21:33:26.335622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.295 [2024-04-26 21:33:26.335630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117776 len:8 PRP1 0x0 PRP2 0x0 00:30:48.295 [2024-04-26 21:33:26.335639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:26.335686] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8439b0 was disconnected and freed. reset controller. 
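The long run of notices above is SPDK draining I/O qpair 0x8439b0: once the TCP connection enters an error state, every READ and WRITE still queued on the submission queue is printed by nvme_io_qpair_print_command and manually completed with the generic status ABORTED - SQ DELETION (00/08), after which the qpair is disconnected and freed so the controller can be reset. The sketch below is a minimal triage helper, assuming this console output has been saved to build.log (the filename and the script itself are illustrative assumptions, not part of the test); it counts the aborted I/O commands per opcode and reports the LBA range they covered.

#!/usr/bin/env python3
"""Tally the aborted I/O printed in the captured console log.

A minimal triage sketch, assuming the output was saved to build.log
(an assumed filename, not produced by the test). It counts the
READ/WRITE commands reported by nvme_io_qpair_print_command, counts
every ABORTED - SQ DELETION completion (I/O and admin), and prints
the LBA range the aborted commands covered.
"""
import re
from collections import Counter

CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:\d+")
ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\)")

def summarize(path: str = "build.log") -> None:
    text = open(path, errors="replace").read()
    counts = Counter()          # aborted I/O commands per opcode
    lbas = []                   # LBAs touched by the aborted commands
    for opcode, lba in CMD_RE.findall(text):
        counts[opcode] += 1
        lbas.append(int(lba))
    aborts = len(ABORT_RE.findall(text))
    lo, hi = (min(lbas), max(lbas)) if lbas else (None, None)
    print(f"commands printed: {dict(counts)}  abort completions: {aborts}  lba range: {lo}..{hi}")

if __name__ == "__main__":
    summarize()

Run next to the saved log (for example, python3 tally_aborts.py with the script name chosen freely); matching command counts against abort completions is a quick sanity check that every outstanding request on the deleted SQ was accounted for.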
00:30:48.295 [2024-04-26 21:33:26.335698] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:48.295 [2024-04-26 21:33:26.335741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.295 [2024-04-26 21:33:26.335753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:26.335766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.295 [2024-04-26 21:33:26.335775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:26.335787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.295 [2024-04-26 21:33:26.335797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:26.335807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.295 [2024-04-26 21:33:26.335816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:26.335826] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.295 [2024-04-26 21:33:26.338976] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.295 [2024-04-26 21:33:26.339014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x837510 (9): Bad file descriptor 00:30:48.295 [2024-04-26 21:33:26.368378] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
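The block just above is the recovery path itself: after the freed qpair requests a controller reset, bdev_nvme_failover_trid starts a failover from 10.0.0.2:4421 to 10.0.0.2:4422, the outstanding ASYNC EVENT REQUESTs on the admin queue are aborted, nqn.2016-06.io.spdk:cnode1 briefly reports a failed state while it is disconnected, flushing the old TCP qpair 0x837510 fails with a bad file descriptor, and roughly 30 ms later the reset completes successfully and I/O resumes on the new path (before the same abort pattern repeats at 21:33:30 when the test forces the next switch). The sketch below, under the same build.log assumption, pulls that timeline out of the saved log (first occurrence of each message) so the path-switch latency can be measured.

#!/usr/bin/env python3
"""Extract the failover timeline from the captured console log.

A minimal sketch, again assuming the output lives in build.log (an
assumed filename). It finds the wall-clock timestamps of the qpair
teardown, the "Start failover" notice and the "Resetting controller
successful" notice, then reports how long the path switch took.
"""
import re
from datetime import datetime

EVENTS = {
    "qpair freed": r"was disconnected and freed\. reset controller\.",
    "failover started": r"Start failover from \S+ to \S+",
    "reset successful": r"Resetting controller successful\.",
}
# Wall-clock timestamp printed in brackets before each message.
TS_RE = r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\]"

def timeline(path: str = "build.log") -> None:
    text = open(path, errors="replace").read()
    stamps = {}
    for name, pattern in EVENTS.items():
        # Match the timestamp immediately preceding the event text.
        m = re.search(TS_RE + r"[^\[]*" + pattern, text)
        if m:
            stamps[name] = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
            print(f"{name:18s} {m.group(1)}")
    if "qpair freed" in stamps and "reset successful" in stamps:
        delta = stamps["reset successful"] - stamps["qpair freed"]
        print(f"path switch took {delta.total_seconds() * 1000:.1f} ms")

if __name__ == "__main__":
    timeline()

For the excerpt above this would report roughly 33 ms between the qpair being freed and the reset completing; comparing that figure across runs is one way to spot failover regressions in this test.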
00:30:48.295 [2024-04-26 21:33:30.818012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818297] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.295 [2024-04-26 21:33:30.818424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.295 [2024-04-26 21:33:30.818447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.295 [2024-04-26 21:33:30.818476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.295 [2024-04-26 21:33:30.818497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.295 [2024-04-26 21:33:30.818518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818530] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.295 [2024-04-26 21:33:30.818540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.295 [2024-04-26 21:33:30.818561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.295 [2024-04-26 21:33:30.818583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.295 [2024-04-26 21:33:30.818603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.295 [2024-04-26 21:33:30.818625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.295 [2024-04-26 21:33:30.818647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.295 [2024-04-26 21:33:30.818668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.295 [2024-04-26 21:33:30.818689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.295 [2024-04-26 21:33:30.818711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.295 [2024-04-26 21:33:30.818723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.818738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.818750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93296 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.818760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.818771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.818781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.818793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.818814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.818825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.296 [2024-04-26 21:33:30.818834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.818845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.296 [2024-04-26 21:33:30.818854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.818865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.296 [2024-04-26 21:33:30.818875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.818885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.296 [2024-04-26 21:33:30.818895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.818905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.296 [2024-04-26 21:33:30.818915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.818926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.296 [2024-04-26 21:33:30.818935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.818947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:48.296 [2024-04-26 21:33:30.818955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.818966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:48.296 [2024-04-26 21:33:30.818975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.818986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.818995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819181] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819399] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.296 [2024-04-26 21:33:30.819556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.296 [2024-04-26 21:33:30.819565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 
[2024-04-26 21:33:30.819836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.819985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.819996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:96 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.297 [2024-04-26 21:33:30.820274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.297 [2024-04-26 21:33:30.820284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.298 [2024-04-26 21:33:30.820293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.298 [2024-04-26 21:33:30.820314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:48.298 [2024-04-26 21:33:30.820350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93840 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93848 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93856 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 
21:33:30.820486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93864 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93872 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93880 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93888 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93896 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93904 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93912 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93920 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93928 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93936 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93944 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93952 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:93960 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.820897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.820903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.820910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93968 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.820918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.841526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.841568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.841580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93976 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.841596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.841609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.841617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.841627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93984 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.841640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.841653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.841661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.841671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93992 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.841686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.841699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.841707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.841717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94000 len:8 PRP1 0x0 PRP2 0x0 00:30:48.298 [2024-04-26 21:33:30.841729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.841742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:48.298 [2024-04-26 21:33:30.841766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:48.298 [2024-04-26 21:33:30.841776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94008 len:8 PRP1 0x0 PRP2 0x0 
00:30:48.298 [2024-04-26 21:33:30.841789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.298 [2024-04-26 21:33:30.841849] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8439b0 was disconnected and freed. reset controller. 00:30:48.299 [2024-04-26 21:33:30.841866] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:48.299 [2024-04-26 21:33:30.841936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.299 [2024-04-26 21:33:30.841970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.299 [2024-04-26 21:33:30.841986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.299 [2024-04-26 21:33:30.841998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.299 [2024-04-26 21:33:30.842011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.299 [2024-04-26 21:33:30.842024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.299 [2024-04-26 21:33:30.842037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:48.299 [2024-04-26 21:33:30.842049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:48.299 [2024-04-26 21:33:30.842061] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:48.299 [2024-04-26 21:33:30.842102] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x837510 (9): Bad file descriptor 00:30:48.299 [2024-04-26 21:33:30.847536] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:48.299 [2024-04-26 21:33:30.876008] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:48.299 00:30:48.299 Latency(us) 00:30:48.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.299 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:48.299 Verification LBA range: start 0x0 length 0x4000 00:30:48.299 NVMe0n1 : 15.01 9997.82 39.05 223.19 0.00 12497.06 447.16 49910.39 00:30:48.299 =================================================================================================================== 00:30:48.299 Total : 9997.82 39.05 223.19 0.00 12497.06 447.16 49910.39 00:30:48.299 Received shutdown signal, test time was about 15.000000 seconds 00:30:48.299 00:30:48.299 Latency(us) 00:30:48.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.299 =================================================================================================================== 00:30:48.299 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:48.299 21:33:36 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:48.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:48.299 21:33:36 -- host/failover.sh@65 -- # count=3 00:30:48.299 21:33:36 -- host/failover.sh@67 -- # (( count != 3 )) 00:30:48.299 21:33:36 -- host/failover.sh@73 -- # bdevperf_pid=100792 00:30:48.299 21:33:36 -- host/failover.sh@75 -- # waitforlisten 100792 /var/tmp/bdevperf.sock 00:30:48.299 21:33:36 -- common/autotest_common.sh@817 -- # '[' -z 100792 ']' 00:30:48.299 21:33:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:48.299 21:33:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:48.299 21:33:36 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:48.299 21:33:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:30:48.299 21:33:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:48.299 21:33:36 -- common/autotest_common.sh@10 -- # set +x 00:30:48.563 21:33:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:48.563 21:33:37 -- common/autotest_common.sh@850 -- # return 0 00:30:48.563 21:33:37 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:48.830 [2024-04-26 21:33:38.003680] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:48.830 21:33:38 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:49.099 [2024-04-26 21:33:38.223465] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:49.099 21:33:38 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.367 NVMe0n1 00:30:49.367 21:33:38 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.637 00:30:49.637 21:33:38 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.910 00:30:49.910 21:33:39 -- host/failover.sh@82 -- # grep -q NVMe0 00:30:49.910 21:33:39 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:50.174 21:33:39 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:50.432 21:33:39 -- host/failover.sh@87 -- # sleep 3 00:30:53.723 21:33:42 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:53.723 21:33:42 -- host/failover.sh@88 -- # grep -q NVMe0 00:30:53.723 21:33:42 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:53.723 21:33:42 -- host/failover.sh@90 -- # run_test_pid=100930 00:30:53.723 21:33:42 -- host/failover.sh@92 -- # wait 100930 00:30:55.101 0 00:30:55.101 21:33:43 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:30:55.101 [2024-04-26 21:33:36.951382] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:30:55.101 [2024-04-26 21:33:36.951466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100792 ] 00:30:55.101 [2024-04-26 21:33:37.091841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.101 [2024-04-26 21:33:37.141732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.101 [2024-04-26 21:33:39.560017] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:55.101 [2024-04-26 21:33:39.560129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.101 [2024-04-26 21:33:39.560148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.101 [2024-04-26 21:33:39.560162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.101 [2024-04-26 21:33:39.560173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.101 [2024-04-26 21:33:39.560183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.101 [2024-04-26 21:33:39.560193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.101 [2024-04-26 21:33:39.560204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.101 [2024-04-26 21:33:39.560213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.101 [2024-04-26 21:33:39.560224] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.101 [2024-04-26 21:33:39.560263] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.101 [2024-04-26 21:33:39.560284] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e2510 (9): Bad file descriptor 00:30:55.101 [2024-04-26 21:33:39.567304] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:55.101 Running I/O for 1 seconds... 
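Note: the failover exercised in this run is driven entirely through rpc.py; the lines below are a minimal sketch, reconstructed only from the commands visible in this trace, of how the alternate paths are added and the active path is torn down to force the failover (addresses, ports, socket paths and the subsystem NQN are the ones used by this particular run and would differ elsewhere):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# target side: expose two additional listeners so the initiator has alternate paths
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

# bdevperf side: register all three paths under one bdev_nvme controller, NVMe0
for port in 4420 4421 4422; do
    $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
done

# drop the active path; bdev_nvme then logs "Start failover ..." followed by
# "Resetting controller successful", which is what the test greps for
$RPC -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
sleep 3
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests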
00:30:55.101 00:30:55.101 Latency(us) 00:30:55.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.101 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:55.101 Verification LBA range: start 0x0 length 0x4000 00:30:55.101 NVMe0n1 : 1.01 10182.25 39.77 0.00 0.00 12512.88 1738.56 14022.99 00:30:55.101 =================================================================================================================== 00:30:55.101 Total : 10182.25 39.77 0.00 0.00 12512.88 1738.56 14022.99 00:30:55.101 21:33:43 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:55.101 21:33:43 -- host/failover.sh@95 -- # grep -q NVMe0 00:30:55.101 21:33:44 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:55.360 21:33:44 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:55.360 21:33:44 -- host/failover.sh@99 -- # grep -q NVMe0 00:30:55.360 21:33:44 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:55.619 21:33:44 -- host/failover.sh@101 -- # sleep 3 00:30:58.908 21:33:47 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:58.908 21:33:47 -- host/failover.sh@103 -- # grep -q NVMe0 00:30:58.908 21:33:48 -- host/failover.sh@108 -- # killprocess 100792 00:30:58.908 21:33:48 -- common/autotest_common.sh@936 -- # '[' -z 100792 ']' 00:30:58.908 21:33:48 -- common/autotest_common.sh@940 -- # kill -0 100792 00:30:58.908 21:33:48 -- common/autotest_common.sh@941 -- # uname 00:30:58.908 21:33:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:58.908 21:33:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100792 00:30:58.908 killing process with pid 100792 00:30:58.908 21:33:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:58.908 21:33:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:58.908 21:33:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100792' 00:30:58.908 21:33:48 -- common/autotest_common.sh@955 -- # kill 100792 00:30:58.908 21:33:48 -- common/autotest_common.sh@960 -- # wait 100792 00:30:59.167 21:33:48 -- host/failover.sh@110 -- # sync 00:30:59.167 21:33:48 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:59.426 21:33:48 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:59.426 21:33:48 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:30:59.426 21:33:48 -- host/failover.sh@116 -- # nvmftestfini 00:30:59.426 21:33:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:59.426 21:33:48 -- nvmf/common.sh@117 -- # sync 00:30:59.426 21:33:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:59.426 21:33:48 -- nvmf/common.sh@120 -- # set +e 00:30:59.426 21:33:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:59.426 21:33:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:59.426 rmmod nvme_tcp 00:30:59.426 rmmod nvme_fabrics 00:30:59.426 rmmod nvme_keyring 00:30:59.426 21:33:48 -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:59.426 21:33:48 -- nvmf/common.sh@124 -- # set -e 00:30:59.426 21:33:48 -- nvmf/common.sh@125 -- # return 0 00:30:59.426 21:33:48 -- nvmf/common.sh@478 -- # '[' -n 100435 ']' 00:30:59.426 21:33:48 -- nvmf/common.sh@479 -- # killprocess 100435 00:30:59.426 21:33:48 -- common/autotest_common.sh@936 -- # '[' -z 100435 ']' 00:30:59.426 21:33:48 -- common/autotest_common.sh@940 -- # kill -0 100435 00:30:59.426 21:33:48 -- common/autotest_common.sh@941 -- # uname 00:30:59.426 21:33:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:59.426 21:33:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100435 00:30:59.426 21:33:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:30:59.426 21:33:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:30:59.426 killing process with pid 100435 00:30:59.426 21:33:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100435' 00:30:59.426 21:33:48 -- common/autotest_common.sh@955 -- # kill 100435 00:30:59.426 21:33:48 -- common/autotest_common.sh@960 -- # wait 100435 00:30:59.685 21:33:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:59.685 21:33:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:59.685 21:33:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:59.685 21:33:48 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:59.685 21:33:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:59.685 21:33:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.685 21:33:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:59.685 21:33:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.685 21:33:48 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:59.685 00:30:59.685 real 0m32.086s 00:30:59.685 user 2m5.078s 00:30:59.685 sys 0m3.853s 00:30:59.685 21:33:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:59.685 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:30:59.685 ************************************ 00:30:59.685 END TEST nvmf_failover 00:30:59.685 ************************************ 00:30:59.945 21:33:48 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:59.945 21:33:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:59.945 21:33:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:59.945 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:30:59.945 ************************************ 00:30:59.945 START TEST nvmf_discovery 00:30:59.945 ************************************ 00:30:59.945 21:33:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:59.945 * Looking for test storage... 
00:30:59.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:59.945 21:33:49 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:59.945 21:33:49 -- nvmf/common.sh@7 -- # uname -s 00:31:00.204 21:33:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.204 21:33:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:00.204 21:33:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.204 21:33:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.204 21:33:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.204 21:33:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.204 21:33:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.204 21:33:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.204 21:33:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.204 21:33:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.204 21:33:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:31:00.204 21:33:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:31:00.204 21:33:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.204 21:33:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.204 21:33:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:00.204 21:33:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:00.204 21:33:49 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:00.204 21:33:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.204 21:33:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.204 21:33:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.204 21:33:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.204 21:33:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.204 21:33:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.204 21:33:49 -- paths/export.sh@5 -- # export PATH 00:31:00.204 21:33:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.204 21:33:49 -- nvmf/common.sh@47 -- # : 0 00:31:00.204 21:33:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:00.204 21:33:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:00.204 21:33:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:00.204 21:33:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.204 21:33:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.204 21:33:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:00.204 21:33:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:00.204 21:33:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:00.204 21:33:49 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:00.204 21:33:49 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:00.204 21:33:49 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:00.204 21:33:49 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:00.204 21:33:49 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:00.204 21:33:49 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:00.204 21:33:49 -- host/discovery.sh@25 -- # nvmftestinit 00:31:00.204 21:33:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:00.204 21:33:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:00.204 21:33:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:00.204 21:33:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:00.204 21:33:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:00.204 21:33:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.204 21:33:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:00.204 21:33:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.204 21:33:49 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:31:00.204 21:33:49 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:31:00.204 21:33:49 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:31:00.204 21:33:49 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:31:00.204 21:33:49 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:31:00.204 21:33:49 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:31:00.204 21:33:49 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:00.204 21:33:49 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.204 21:33:49 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:00.204 21:33:49 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:00.204 21:33:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:00.204 21:33:49 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:00.204 21:33:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:00.204 21:33:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.204 21:33:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:00.204 21:33:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:00.204 21:33:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:00.204 21:33:49 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:00.204 21:33:49 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:00.204 21:33:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:00.204 Cannot find device "nvmf_tgt_br" 00:31:00.204 21:33:49 -- nvmf/common.sh@155 -- # true 00:31:00.204 21:33:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:00.204 Cannot find device "nvmf_tgt_br2" 00:31:00.204 21:33:49 -- nvmf/common.sh@156 -- # true 00:31:00.205 21:33:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:00.205 21:33:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:00.205 Cannot find device "nvmf_tgt_br" 00:31:00.205 21:33:49 -- nvmf/common.sh@158 -- # true 00:31:00.205 21:33:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:00.205 Cannot find device "nvmf_tgt_br2" 00:31:00.205 21:33:49 -- nvmf/common.sh@159 -- # true 00:31:00.205 21:33:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:00.205 21:33:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:00.205 21:33:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:00.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:00.205 21:33:49 -- nvmf/common.sh@162 -- # true 00:31:00.205 21:33:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:00.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:00.205 21:33:49 -- nvmf/common.sh@163 -- # true 00:31:00.205 21:33:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:00.205 21:33:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:00.205 21:33:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:00.205 21:33:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:00.205 21:33:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:00.205 21:33:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:00.464 21:33:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:00.464 21:33:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:00.464 21:33:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:00.464 21:33:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:00.464 21:33:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:00.464 21:33:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:00.464 21:33:49 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:00.464 21:33:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:00.464 21:33:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:00.464 21:33:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:00.464 21:33:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:00.464 21:33:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:00.464 21:33:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:00.464 21:33:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:00.464 21:33:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:00.464 21:33:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:00.464 21:33:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:00.464 21:33:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:00.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:00.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:31:00.464 00:31:00.464 --- 10.0.0.2 ping statistics --- 00:31:00.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.464 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:31:00.464 21:33:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:00.464 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:00.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:31:00.464 00:31:00.464 --- 10.0.0.3 ping statistics --- 00:31:00.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.464 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:31:00.464 21:33:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:00.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:00.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:31:00.464 00:31:00.464 --- 10.0.0.1 ping statistics --- 00:31:00.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.464 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:31:00.464 21:33:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.464 21:33:49 -- nvmf/common.sh@422 -- # return 0 00:31:00.464 21:33:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:00.464 21:33:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.464 21:33:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:00.464 21:33:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:00.464 21:33:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.464 21:33:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:00.464 21:33:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:00.464 21:33:49 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:00.464 21:33:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:00.464 21:33:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:00.464 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:31:00.464 21:33:49 -- nvmf/common.sh@470 -- # nvmfpid=101243 00:31:00.464 21:33:49 -- nvmf/common.sh@471 -- # waitforlisten 101243 00:31:00.464 21:33:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:00.464 21:33:49 -- common/autotest_common.sh@817 -- # '[' -z 101243 ']' 00:31:00.464 21:33:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.464 21:33:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:00.464 21:33:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.464 21:33:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:00.464 21:33:49 -- common/autotest_common.sh@10 -- # set +x 00:31:00.464 [2024-04-26 21:33:49.601660] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:31:00.464 [2024-04-26 21:33:49.601724] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.723 [2024-04-26 21:33:49.742214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.723 [2024-04-26 21:33:49.793983] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.723 [2024-04-26 21:33:49.794035] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.723 [2024-04-26 21:33:49.794042] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.723 [2024-04-26 21:33:49.794048] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.723 [2024-04-26 21:33:49.794053] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
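Note: the ping output above comes from the veth topology that nvmf_veth_init builds for these host tests; a condensed sketch of that setup, using only the ip/iptables commands that appear in this trace (interface names and the 10.0.0.0/24 addressing are specific to this environment), is:

# the target runs in its own network namespace, reached over a Linux bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side: 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side:    10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target side:    10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the nvmf target itself is then started inside the namespace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2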
00:31:00.723 [2024-04-26 21:33:49.794077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:01.291 21:33:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:01.291 21:33:50 -- common/autotest_common.sh@850 -- # return 0 00:31:01.291 21:33:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:01.291 21:33:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:01.291 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.291 21:33:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:01.291 21:33:50 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:01.291 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.291 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.291 [2024-04-26 21:33:50.524971] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.291 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.291 21:33:50 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:01.291 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.291 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.291 [2024-04-26 21:33:50.537075] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:01.291 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.291 21:33:50 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:01.291 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.291 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.554 null0 00:31:01.554 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.554 21:33:50 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:01.554 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.554 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.554 null1 00:31:01.554 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.554 21:33:50 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:01.554 21:33:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.554 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.554 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:01.554 21:33:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.554 21:33:50 -- host/discovery.sh@45 -- # hostpid=101292 00:31:01.554 21:33:50 -- host/discovery.sh@46 -- # waitforlisten 101292 /tmp/host.sock 00:31:01.554 21:33:50 -- common/autotest_common.sh@817 -- # '[' -z 101292 ']' 00:31:01.554 21:33:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:31:01.554 21:33:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:01.554 21:33:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:01.554 21:33:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:01.554 21:33:50 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:01.554 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:31:01.554 [2024-04-26 21:33:50.619089] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:31:01.554 [2024-04-26 21:33:50.619159] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101292 ] 00:31:01.554 [2024-04-26 21:33:50.756932] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.816 [2024-04-26 21:33:50.811751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.382 21:33:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:02.383 21:33:51 -- common/autotest_common.sh@850 -- # return 0 00:31:02.383 21:33:51 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:02.383 21:33:51 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:02.383 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.383 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.383 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.383 21:33:51 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:02.383 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.383 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.383 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.383 21:33:51 -- host/discovery.sh@72 -- # notify_id=0 00:31:02.383 21:33:51 -- host/discovery.sh@83 -- # get_subsystem_names 00:31:02.383 21:33:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:02.383 21:33:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:02.383 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.383 21:33:51 -- host/discovery.sh@59 -- # sort 00:31:02.383 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.383 21:33:51 -- host/discovery.sh@59 -- # xargs 00:31:02.383 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.383 21:33:51 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:02.383 21:33:51 -- host/discovery.sh@84 -- # get_bdev_list 00:31:02.383 21:33:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:02.383 21:33:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:02.383 21:33:51 -- host/discovery.sh@55 -- # sort 00:31:02.383 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.383 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.383 21:33:51 -- host/discovery.sh@55 -- # xargs 00:31:02.383 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.659 21:33:51 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:02.659 21:33:51 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:02.659 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.659 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.659 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.659 21:33:51 -- host/discovery.sh@87 -- # get_subsystem_names 00:31:02.659 21:33:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:02.660 21:33:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:02.660 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.660 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.660 21:33:51 -- host/discovery.sh@59 
-- # sort 00:31:02.660 21:33:51 -- host/discovery.sh@59 -- # xargs 00:31:02.660 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.660 21:33:51 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:02.660 21:33:51 -- host/discovery.sh@88 -- # get_bdev_list 00:31:02.660 21:33:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:02.660 21:33:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:02.660 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.660 21:33:51 -- host/discovery.sh@55 -- # sort 00:31:02.660 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.660 21:33:51 -- host/discovery.sh@55 -- # xargs 00:31:02.660 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.660 21:33:51 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:02.660 21:33:51 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:02.660 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.660 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.660 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.660 21:33:51 -- host/discovery.sh@91 -- # get_subsystem_names 00:31:02.660 21:33:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:02.660 21:33:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:02.660 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.660 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.660 21:33:51 -- host/discovery.sh@59 -- # sort 00:31:02.660 21:33:51 -- host/discovery.sh@59 -- # xargs 00:31:02.660 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.660 21:33:51 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:02.660 21:33:51 -- host/discovery.sh@92 -- # get_bdev_list 00:31:02.660 21:33:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:02.660 21:33:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:02.660 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.660 21:33:51 -- host/discovery.sh@55 -- # sort 00:31:02.660 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.660 21:33:51 -- host/discovery.sh@55 -- # xargs 00:31:02.660 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.660 21:33:51 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:02.660 21:33:51 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:02.660 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.660 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.660 [2024-04-26 21:33:51.890747] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:02.940 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.940 21:33:51 -- host/discovery.sh@97 -- # get_subsystem_names 00:31:02.940 21:33:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:02.940 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.940 21:33:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:02.940 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.940 21:33:51 -- host/discovery.sh@59 -- # sort 00:31:02.940 21:33:51 -- host/discovery.sh@59 -- # xargs 00:31:02.940 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.940 21:33:51 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:02.940 21:33:51 
-- host/discovery.sh@98 -- # get_bdev_list 00:31:02.940 21:33:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:02.940 21:33:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:02.940 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.940 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.940 21:33:51 -- host/discovery.sh@55 -- # sort 00:31:02.940 21:33:51 -- host/discovery.sh@55 -- # xargs 00:31:02.940 21:33:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.940 21:33:51 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:02.940 21:33:51 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:02.940 21:33:51 -- host/discovery.sh@79 -- # expected_count=0 00:31:02.940 21:33:51 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:02.940 21:33:51 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:02.940 21:33:51 -- common/autotest_common.sh@901 -- # local max=10 00:31:02.940 21:33:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:02.940 21:33:51 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:02.940 21:33:51 -- common/autotest_common.sh@903 -- # get_notification_count 00:31:02.940 21:33:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:02.940 21:33:51 -- host/discovery.sh@74 -- # jq '. | length' 00:31:02.940 21:33:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.940 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:31:02.940 21:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.940 21:33:52 -- host/discovery.sh@74 -- # notification_count=0 00:31:02.940 21:33:52 -- host/discovery.sh@75 -- # notify_id=0 00:31:02.940 21:33:52 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:31:02.940 21:33:52 -- common/autotest_common.sh@904 -- # return 0 00:31:02.940 21:33:52 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:02.940 21:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.940 21:33:52 -- common/autotest_common.sh@10 -- # set +x 00:31:02.940 21:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.940 21:33:52 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:02.940 21:33:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:02.940 21:33:52 -- common/autotest_common.sh@901 -- # local max=10 00:31:02.940 21:33:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:02.940 21:33:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:02.940 21:33:52 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:31:02.940 21:33:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:02.940 21:33:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:02.940 21:33:52 -- common/autotest_common.sh@10 -- # set +x 00:31:02.940 21:33:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:02.940 21:33:52 -- host/discovery.sh@59 -- # sort 00:31:02.940 21:33:52 -- host/discovery.sh@59 -- # xargs 00:31:02.940 21:33:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:02.940 21:33:52 -- common/autotest_common.sh@903 -- 
# [[ '' == \n\v\m\e\0 ]] 00:31:02.940 21:33:52 -- common/autotest_common.sh@906 -- # sleep 1 00:31:03.513 [2024-04-26 21:33:52.546196] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:03.513 [2024-04-26 21:33:52.546241] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:03.513 [2024-04-26 21:33:52.546264] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:03.513 [2024-04-26 21:33:52.632159] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:03.513 [2024-04-26 21:33:52.688016] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:03.513 [2024-04-26 21:33:52.688058] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:04.192 21:33:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:04.192 21:33:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:04.192 21:33:53 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:31:04.192 21:33:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:04.192 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.192 21:33:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:04.192 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:04.192 21:33:53 -- host/discovery.sh@59 -- # sort 00:31:04.192 21:33:53 -- host/discovery.sh@59 -- # xargs 00:31:04.192 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.192 21:33:53 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.192 21:33:53 -- common/autotest_common.sh@904 -- # return 0 00:31:04.192 21:33:53 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:04.192 21:33:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:04.192 21:33:53 -- common/autotest_common.sh@901 -- # local max=10 00:31:04.192 21:33:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:04.192 21:33:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:04.192 21:33:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:31:04.192 21:33:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:04.192 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.192 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:04.192 21:33:53 -- host/discovery.sh@55 -- # sort 00:31:04.192 21:33:53 -- host/discovery.sh@55 -- # xargs 00:31:04.192 21:33:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:04.192 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.192 21:33:53 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:04.192 21:33:53 -- common/autotest_common.sh@904 -- # return 0 00:31:04.192 21:33:53 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:04.192 21:33:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:04.192 21:33:53 -- common/autotest_common.sh@901 -- # local max=10 00:31:04.192 21:33:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:04.193 21:33:53 
-- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:04.193 21:33:53 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:31:04.193 21:33:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:04.193 21:33:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:04.193 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.193 21:33:53 -- host/discovery.sh@63 -- # sort -n 00:31:04.193 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:04.193 21:33:53 -- host/discovery.sh@63 -- # xargs 00:31:04.193 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.193 21:33:53 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:31:04.193 21:33:53 -- common/autotest_common.sh@904 -- # return 0 00:31:04.193 21:33:53 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:04.193 21:33:53 -- host/discovery.sh@79 -- # expected_count=1 00:31:04.193 21:33:53 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:04.193 21:33:53 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:04.193 21:33:53 -- common/autotest_common.sh@901 -- # local max=10 00:31:04.193 21:33:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:04.193 21:33:53 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:04.193 21:33:53 -- common/autotest_common.sh@903 -- # get_notification_count 00:31:04.193 21:33:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:04.193 21:33:53 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:04.193 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.193 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:04.193 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.193 21:33:53 -- host/discovery.sh@74 -- # notification_count=1 00:31:04.193 21:33:53 -- host/discovery.sh@75 -- # notify_id=1 00:31:04.193 21:33:53 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:31:04.193 21:33:53 -- common/autotest_common.sh@904 -- # return 0 00:31:04.193 21:33:53 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:04.193 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.193 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:04.193 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.193 21:33:53 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:04.193 21:33:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:04.193 21:33:53 -- common/autotest_common.sh@901 -- # local max=10 00:31:04.193 21:33:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:04.193 21:33:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:04.193 21:33:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:31:04.193 21:33:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:04.193 21:33:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:04.193 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.193 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:04.193 21:33:53 -- host/discovery.sh@55 -- # sort 00:31:04.193 21:33:53 -- host/discovery.sh@55 -- # xargs 00:31:04.193 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.193 21:33:53 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:04.193 21:33:53 -- common/autotest_common.sh@904 -- # return 0 00:31:04.193 21:33:53 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:04.193 21:33:53 -- host/discovery.sh@79 -- # expected_count=1 00:31:04.193 21:33:53 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:04.193 21:33:53 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:04.193 21:33:53 -- common/autotest_common.sh@901 -- # local max=10 00:31:04.193 21:33:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:04.193 21:33:53 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:04.193 21:33:53 -- common/autotest_common.sh@903 -- # get_notification_count 00:31:04.193 21:33:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:04.193 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.193 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:04.193 21:33:53 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:04.193 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.193 21:33:53 -- host/discovery.sh@74 -- # notification_count=1 00:31:04.193 21:33:53 -- host/discovery.sh@75 -- # notify_id=2 00:31:04.193 21:33:53 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:31:04.193 21:33:53 -- common/autotest_common.sh@904 -- # return 0 00:31:04.193 21:33:53 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:04.193 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.193 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:04.193 [2024-04-26 21:33:53.392441] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:04.193 [2024-04-26 21:33:53.393574] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:04.193 [2024-04-26 21:33:53.393622] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:04.193 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.193 21:33:53 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:04.193 21:33:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:04.193 21:33:53 -- common/autotest_common.sh@901 -- # local max=10 00:31:04.193 21:33:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:04.193 21:33:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:04.193 21:33:53 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:31:04.193 21:33:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:04.193 21:33:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:04.193 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.193 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:04.193 21:33:53 -- host/discovery.sh@59 -- # sort 00:31:04.193 21:33:53 -- host/discovery.sh@59 -- # xargs 00:31:04.193 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.471 21:33:53 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:04.471 21:33:53 -- common/autotest_common.sh@904 -- # return 0 00:31:04.471 21:33:53 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:04.471 21:33:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:04.471 21:33:53 -- common/autotest_common.sh@901 -- # local max=10 00:31:04.471 21:33:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:04.471 21:33:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:04.471 21:33:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:31:04.471 21:33:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:04.471 21:33:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:04.471 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.471 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:04.471 21:33:53 -- host/discovery.sh@55 -- # sort 00:31:04.471 21:33:53 -- host/discovery.sh@55 -- # xargs 00:31:04.471 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.471 [2024-04-26 21:33:53.480433] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:04.471 21:33:53 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:04.471 21:33:53 -- common/autotest_common.sh@904 -- # return 0 00:31:04.471 21:33:53 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:04.471 21:33:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:04.471 21:33:53 -- common/autotest_common.sh@901 -- # local max=10 00:31:04.471 21:33:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:04.471 21:33:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:04.471 21:33:53 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:31:04.471 21:33:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:04.471 21:33:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:04.471 21:33:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:04.471 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:31:04.471 21:33:53 -- host/discovery.sh@63 -- # sort -n 00:31:04.471 21:33:53 -- host/discovery.sh@63 -- # xargs 00:31:04.471 21:33:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:04.471 21:33:53 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:04.471 21:33:53 -- common/autotest_common.sh@906 -- # sleep 1 00:31:04.471 [2024-04-26 21:33:53.537624] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:04.472 [2024-04-26 21:33:53.537655] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:04.472 [2024-04-26 21:33:53.537663] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:05.406 21:33:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.406 21:33:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:05.406 21:33:54 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:31:05.406 21:33:54 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:05.406 21:33:54 -- host/discovery.sh@63 -- # sort -n 00:31:05.406 21:33:54 -- host/discovery.sh@63 -- # xargs 00:31:05.406 21:33:54 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:05.406 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.406 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:31:05.406 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.406 21:33:54 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:05.406 21:33:54 -- common/autotest_common.sh@904 -- # return 0 00:31:05.406 21:33:54 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:05.407 21:33:54 -- host/discovery.sh@79 -- # expected_count=0 00:31:05.407 21:33:54 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:05.407 21:33:54 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:05.407 21:33:54 -- 
common/autotest_common.sh@901 -- # local max=10 00:31:05.407 21:33:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.407 21:33:54 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:05.407 21:33:54 -- common/autotest_common.sh@903 -- # get_notification_count 00:31:05.407 21:33:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:05.407 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.407 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:31:05.407 21:33:54 -- host/discovery.sh@74 -- # jq '. | length' 00:31:05.407 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.407 21:33:54 -- host/discovery.sh@74 -- # notification_count=0 00:31:05.407 21:33:54 -- host/discovery.sh@75 -- # notify_id=2 00:31:05.407 21:33:54 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:31:05.407 21:33:54 -- common/autotest_common.sh@904 -- # return 0 00:31:05.407 21:33:54 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:05.407 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.407 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:31:05.407 [2024-04-26 21:33:54.639396] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:05.407 [2024-04-26 21:33:54.639496] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:05.407 [2024-04-26 21:33:54.641868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.407 [2024-04-26 21:33:54.641943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.407 [2024-04-26 21:33:54.641982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.407 [2024-04-26 21:33:54.642015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.407 [2024-04-26 21:33:54.642048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.407 [2024-04-26 21:33:54.642079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.407 [2024-04-26 21:33:54.642101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.407 [2024-04-26 21:33:54.642108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.407 [2024-04-26 21:33:54.642115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102f980 is same with the state(5) to be set 00:31:05.407 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.407 21:33:54 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:05.407 21:33:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:05.407 21:33:54 -- common/autotest_common.sh@901 -- # local max=10 00:31:05.407 21:33:54 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.407 21:33:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:05.407 21:33:54 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:31:05.407 21:33:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.407 21:33:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.407 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.407 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:31:05.407 21:33:54 -- host/discovery.sh@59 -- # xargs 00:31:05.407 21:33:54 -- host/discovery.sh@59 -- # sort 00:31:05.407 [2024-04-26 21:33:54.651812] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102f980 (9): Bad file descriptor 00:31:05.682 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.682 [2024-04-26 21:33:54.661817] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.682 [2024-04-26 21:33:54.661952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.661990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.662000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102f980 with addr=10.0.0.2, port=4420 00:31:05.682 [2024-04-26 21:33:54.662009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102f980 is same with the state(5) to be set 00:31:05.682 [2024-04-26 21:33:54.662024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102f980 (9): Bad file descriptor 00:31:05.682 [2024-04-26 21:33:54.662035] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.682 [2024-04-26 21:33:54.662041] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.682 [2024-04-26 21:33:54.662049] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.682 [2024-04-26 21:33:54.662062] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
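The repeated "local max=10 / (( max-- )) / eval / sleep 1" traces above come from the suite's waitforcondition helper, which re-evaluates a shell condition once per second until it holds. A rough reconstruction follows; the exact failure handling after the loop is an assumption.

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            # Re-evaluate the condition string each second until it succeeds.
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1    # assumption: give up (and let the test fail) after ~10 tries
    }

    # Usage as seen in the traces: wait until only the second port remains.
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'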
00:31:05.682 [2024-04-26 21:33:54.671867] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.682 [2024-04-26 21:33:54.671979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.672012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.672022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102f980 with addr=10.0.0.2, port=4420 00:31:05.682 [2024-04-26 21:33:54.672031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102f980 is same with the state(5) to be set 00:31:05.682 [2024-04-26 21:33:54.672044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102f980 (9): Bad file descriptor 00:31:05.682 [2024-04-26 21:33:54.672054] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.682 [2024-04-26 21:33:54.672060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.682 [2024-04-26 21:33:54.672067] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.682 [2024-04-26 21:33:54.672078] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:05.682 [2024-04-26 21:33:54.681917] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.682 [2024-04-26 21:33:54.682046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.682084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.682096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102f980 with addr=10.0.0.2, port=4420 00:31:05.682 [2024-04-26 21:33:54.682108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102f980 is same with the state(5) to be set 00:31:05.682 [2024-04-26 21:33:54.682127] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102f980 (9): Bad file descriptor 00:31:05.682 [2024-04-26 21:33:54.682138] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.682 [2024-04-26 21:33:54.682144] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.682 [2024-04-26 21:33:54.682153] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.682 [2024-04-26 21:33:54.682166] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:05.682 [2024-04-26 21:33:54.691977] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.682 [2024-04-26 21:33:54.692102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.692138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.692148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102f980 with addr=10.0.0.2, port=4420 00:31:05.682 [2024-04-26 21:33:54.692156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102f980 is same with the state(5) to be set 00:31:05.682 [2024-04-26 21:33:54.692170] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102f980 (9): Bad file descriptor 00:31:05.682 [2024-04-26 21:33:54.692181] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.682 [2024-04-26 21:33:54.692188] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.682 [2024-04-26 21:33:54.692195] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.682 [2024-04-26 21:33:54.692207] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:05.682 [2024-04-26 21:33:54.702029] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.682 [2024-04-26 21:33:54.702136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.702167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.702177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102f980 with addr=10.0.0.2, port=4420 00:31:05.682 [2024-04-26 21:33:54.702185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102f980 is same with the state(5) to be set 00:31:05.682 [2024-04-26 21:33:54.702198] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102f980 (9): Bad file descriptor 00:31:05.682 [2024-04-26 21:33:54.702209] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.682 [2024-04-26 21:33:54.702215] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.682 [2024-04-26 21:33:54.702222] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.682 [2024-04-26 21:33:54.702234] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:05.682 21:33:54 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:05.682 21:33:54 -- common/autotest_common.sh@904 -- # return 0 00:31:05.682 21:33:54 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:05.682 21:33:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:05.682 21:33:54 -- common/autotest_common.sh@901 -- # local max=10 00:31:05.682 21:33:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.682 21:33:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:05.682 21:33:54 -- common/autotest_common.sh@903 -- # get_bdev_list 00:31:05.682 21:33:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:05.682 21:33:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.682 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.682 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:31:05.682 21:33:54 -- host/discovery.sh@55 -- # xargs 00:31:05.682 21:33:54 -- host/discovery.sh@55 -- # sort 00:31:05.682 [2024-04-26 21:33:54.712075] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.682 [2024-04-26 21:33:54.712197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.712234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.712244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102f980 with addr=10.0.0.2, port=4420 00:31:05.682 [2024-04-26 21:33:54.712253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102f980 is same with the state(5) to be set 00:31:05.682 [2024-04-26 21:33:54.712267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102f980 (9): Bad file descriptor 00:31:05.682 [2024-04-26 21:33:54.712278] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.682 [2024-04-26 21:33:54.712284] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.682 [2024-04-26 21:33:54.712292] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.682 [2024-04-26 21:33:54.712305] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
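The discovery.sh@55 and @63 traces interleaved above are the helpers that flatten RPC output into the single-line strings the conditions compare against (e.g. "nvme0n1 nvme0n2" or "4420 4421"). A sketch of both, using the same jq/sort/xargs pipeline shown in the traces; rpc_cmd here stands for the suite's RPC wrapper seen throughout the log.

    get_bdev_list() {
        # Names of all bdevs visible to the host app, space-separated.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {
        # Service IDs (ports) of every path attached to the given controller,
        # e.g. "4421" once the 4420 listener has been removed.
        local name=$1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }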
00:31:05.682 [2024-04-26 21:33:54.722125] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:05.682 [2024-04-26 21:33:54.722267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.722314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:05.682 [2024-04-26 21:33:54.722328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102f980 with addr=10.0.0.2, port=4420 00:31:05.683 [2024-04-26 21:33:54.722356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102f980 is same with the state(5) to be set 00:31:05.683 [2024-04-26 21:33:54.722376] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102f980 (9): Bad file descriptor 00:31:05.683 [2024-04-26 21:33:54.722392] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:05.683 [2024-04-26 21:33:54.722400] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:05.683 [2024-04-26 21:33:54.722411] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:05.683 [2024-04-26 21:33:54.722428] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:05.683 [2024-04-26 21:33:54.726529] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:05.683 [2024-04-26 21:33:54.726563] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:05.683 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.683 21:33:54 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:05.683 21:33:54 -- common/autotest_common.sh@904 -- # return 0 00:31:05.683 21:33:54 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:05.683 21:33:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:05.683 21:33:54 -- common/autotest_common.sh@901 -- # local max=10 00:31:05.683 21:33:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.683 21:33:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:05.683 21:33:54 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:31:05.683 21:33:54 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:05.683 21:33:54 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:05.683 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.683 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:31:05.683 21:33:54 -- host/discovery.sh@63 -- # xargs 00:31:05.683 21:33:54 -- host/discovery.sh@63 -- # sort -n 00:31:05.683 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.683 21:33:54 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:31:05.683 21:33:54 -- common/autotest_common.sh@904 -- # return 0 00:31:05.683 21:33:54 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:05.683 21:33:54 -- host/discovery.sh@79 -- # expected_count=0 00:31:05.683 21:33:54 -- 
host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:05.683 21:33:54 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:05.683 21:33:54 -- common/autotest_common.sh@901 -- # local max=10 00:31:05.683 21:33:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.683 21:33:54 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:05.683 21:33:54 -- common/autotest_common.sh@903 -- # get_notification_count 00:31:05.683 21:33:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:05.683 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.683 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:31:05.683 21:33:54 -- host/discovery.sh@74 -- # jq '. | length' 00:31:05.683 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.683 21:33:54 -- host/discovery.sh@74 -- # notification_count=0 00:31:05.683 21:33:54 -- host/discovery.sh@75 -- # notify_id=2 00:31:05.683 21:33:54 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:31:05.683 21:33:54 -- common/autotest_common.sh@904 -- # return 0 00:31:05.683 21:33:54 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:05.683 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.683 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:31:05.683 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.683 21:33:54 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:05.683 21:33:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:05.683 21:33:54 -- common/autotest_common.sh@901 -- # local max=10 00:31:05.683 21:33:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.683 21:33:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:05.683 21:33:54 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:31:05.683 21:33:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:05.683 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.683 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:31:05.683 21:33:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:05.683 21:33:54 -- host/discovery.sh@59 -- # sort 00:31:05.683 21:33:54 -- host/discovery.sh@59 -- # xargs 00:31:05.683 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.683 21:33:54 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:31:05.683 21:33:54 -- common/autotest_common.sh@904 -- # return 0 00:31:05.683 21:33:54 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:05.683 21:33:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:05.683 21:33:54 -- common/autotest_common.sh@901 -- # local max=10 00:31:05.683 21:33:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.683 21:33:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:05.683 21:33:54 -- common/autotest_common.sh@903 -- # get_bdev_list 00:31:05.683 21:33:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:05.683 21:33:54 -- host/discovery.sh@55 -- # sort 00:31:05.683 21:33:54 -- host/discovery.sh@55 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:31:05.683 21:33:54 -- host/discovery.sh@55 -- # xargs 00:31:05.683 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.683 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:31:05.683 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.941 21:33:54 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:31:05.941 21:33:54 -- common/autotest_common.sh@904 -- # return 0 00:31:05.941 21:33:54 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:05.941 21:33:54 -- host/discovery.sh@79 -- # expected_count=2 00:31:05.941 21:33:54 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:05.941 21:33:54 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:05.941 21:33:54 -- common/autotest_common.sh@901 -- # local max=10 00:31:05.941 21:33:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:31:05.941 21:33:54 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:05.941 21:33:54 -- common/autotest_common.sh@903 -- # get_notification_count 00:31:05.941 21:33:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:05.941 21:33:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.941 21:33:54 -- common/autotest_common.sh@10 -- # set +x 00:31:05.941 21:33:54 -- host/discovery.sh@74 -- # jq '. | length' 00:31:05.941 21:33:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.941 21:33:55 -- host/discovery.sh@74 -- # notification_count=2 00:31:05.941 21:33:55 -- host/discovery.sh@75 -- # notify_id=4 00:31:05.941 21:33:55 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:31:05.941 21:33:55 -- common/autotest_common.sh@904 -- # return 0 00:31:05.941 21:33:55 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:05.941 21:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.941 21:33:55 -- common/autotest_common.sh@10 -- # set +x 00:31:06.873 [2024-04-26 21:33:56.020014] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:06.873 [2024-04-26 21:33:56.020139] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:06.873 [2024-04-26 21:33:56.020176] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:06.873 [2024-04-26 21:33:56.105992] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:07.132 [2024-04-26 21:33:56.165287] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:07.132 [2024-04-26 21:33:56.165473] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:07.132 21:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.132 21:33:56 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:07.132 21:33:56 -- common/autotest_common.sh@638 -- # local es=0 00:31:07.132 21:33:56 -- common/autotest_common.sh@640 
-- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:07.132 21:33:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:07.132 21:33:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:07.132 21:33:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:07.132 21:33:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:07.132 21:33:56 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:07.132 21:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.132 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:31:07.132 2024/04/26 21:33:56 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:31:07.132 request: 00:31:07.132 { 00:31:07.132 "method": "bdev_nvme_start_discovery", 00:31:07.132 "params": { 00:31:07.132 "name": "nvme", 00:31:07.132 "trtype": "tcp", 00:31:07.132 "traddr": "10.0.0.2", 00:31:07.132 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:07.132 "adrfam": "ipv4", 00:31:07.132 "trsvcid": "8009", 00:31:07.132 "wait_for_attach": true 00:31:07.132 } 00:31:07.132 } 00:31:07.132 Got JSON-RPC error response 00:31:07.132 GoRPCClient: error on JSON-RPC call 00:31:07.132 21:33:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:07.132 21:33:56 -- common/autotest_common.sh@641 -- # es=1 00:31:07.132 21:33:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:07.132 21:33:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:07.132 21:33:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:07.132 21:33:56 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:07.132 21:33:56 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:07.132 21:33:56 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:07.132 21:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.132 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:31:07.132 21:33:56 -- host/discovery.sh@67 -- # sort 00:31:07.132 21:33:56 -- host/discovery.sh@67 -- # xargs 00:31:07.132 21:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.132 21:33:56 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:07.132 21:33:56 -- host/discovery.sh@146 -- # get_bdev_list 00:31:07.132 21:33:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:07.132 21:33:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:07.132 21:33:56 -- host/discovery.sh@55 -- # sort 00:31:07.132 21:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.132 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:31:07.132 21:33:56 -- host/discovery.sh@55 -- # xargs 00:31:07.132 21:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.132 21:33:56 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:07.132 21:33:56 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:07.132 21:33:56 -- common/autotest_common.sh@638 -- # local es=0 00:31:07.132 21:33:56 
-- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:07.132 21:33:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:07.132 21:33:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:07.132 21:33:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:07.132 21:33:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:07.132 21:33:56 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:07.132 21:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.132 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:31:07.132 2024/04/26 21:33:56 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:31:07.132 request: 00:31:07.132 { 00:31:07.132 "method": "bdev_nvme_start_discovery", 00:31:07.132 "params": { 00:31:07.132 "name": "nvme_second", 00:31:07.132 "trtype": "tcp", 00:31:07.132 "traddr": "10.0.0.2", 00:31:07.132 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:07.132 "adrfam": "ipv4", 00:31:07.132 "trsvcid": "8009", 00:31:07.132 "wait_for_attach": true 00:31:07.132 } 00:31:07.132 } 00:31:07.132 Got JSON-RPC error response 00:31:07.132 GoRPCClient: error on JSON-RPC call 00:31:07.132 21:33:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:07.132 21:33:56 -- common/autotest_common.sh@641 -- # es=1 00:31:07.132 21:33:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:07.132 21:33:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:07.132 21:33:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:07.132 21:33:56 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:07.132 21:33:56 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:07.132 21:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.132 21:33:56 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:07.132 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:31:07.132 21:33:56 -- host/discovery.sh@67 -- # sort 00:31:07.132 21:33:56 -- host/discovery.sh@67 -- # xargs 00:31:07.132 21:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.132 21:33:56 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:07.132 21:33:56 -- host/discovery.sh@152 -- # get_bdev_list 00:31:07.132 21:33:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:07.132 21:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.132 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:31:07.132 21:33:56 -- host/discovery.sh@55 -- # sort 00:31:07.132 21:33:56 -- host/discovery.sh@55 -- # xargs 00:31:07.132 21:33:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:07.132 21:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:07.391 21:33:56 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:07.391 21:33:56 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:07.391 21:33:56 -- 
common/autotest_common.sh@638 -- # local es=0 00:31:07.391 21:33:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:07.391 21:33:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:07.391 21:33:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:07.391 21:33:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:07.391 21:33:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:07.391 21:33:56 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:07.391 21:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:07.391 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:31:08.326 [2024-04-26 21:33:57.408792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.326 [2024-04-26 21:33:57.408983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:08.326 [2024-04-26 21:33:57.409018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x120cdd0 with addr=10.0.0.2, port=8010 00:31:08.326 [2024-04-26 21:33:57.409074] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:08.326 [2024-04-26 21:33:57.409105] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:08.326 [2024-04-26 21:33:57.409137] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:09.261 [2024-04-26 21:33:58.406860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.261 [2024-04-26 21:33:58.407046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:09.261 [2024-04-26 21:33:58.407081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x104be60 with addr=10.0.0.2, port=8010 00:31:09.261 [2024-04-26 21:33:58.407102] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:09.261 [2024-04-26 21:33:58.407110] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:09.261 [2024-04-26 21:33:58.407118] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:10.197 [2024-04-26 21:33:59.404786] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:10.198 2024/04/26 21:33:59 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:31:10.198 request: 00:31:10.198 { 00:31:10.198 "method": "bdev_nvme_start_discovery", 00:31:10.198 "params": { 00:31:10.198 "name": "nvme_second", 00:31:10.198 "trtype": "tcp", 00:31:10.198 "traddr": "10.0.0.2", 00:31:10.198 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:10.198 "adrfam": "ipv4", 00:31:10.198 "trsvcid": "8010", 00:31:10.198 "attach_timeout_ms": 3000 00:31:10.198 } 00:31:10.198 } 00:31:10.198 Got JSON-RPC error response 00:31:10.198 GoRPCClient: error on JSON-RPC call 00:31:10.198 21:33:59 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:10.198 21:33:59 -- 
common/autotest_common.sh@641 -- # es=1 00:31:10.198 21:33:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:10.198 21:33:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:10.198 21:33:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:10.198 21:33:59 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:10.198 21:33:59 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:10.198 21:33:59 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:10.198 21:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:10.198 21:33:59 -- host/discovery.sh@67 -- # sort 00:31:10.198 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:31:10.198 21:33:59 -- host/discovery.sh@67 -- # xargs 00:31:10.198 21:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:10.473 21:33:59 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:10.473 21:33:59 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:10.473 21:33:59 -- host/discovery.sh@161 -- # kill 101292 00:31:10.473 21:33:59 -- host/discovery.sh@162 -- # nvmftestfini 00:31:10.473 21:33:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:10.473 21:33:59 -- nvmf/common.sh@117 -- # sync 00:31:10.473 21:33:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:10.473 21:33:59 -- nvmf/common.sh@120 -- # set +e 00:31:10.473 21:33:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:10.473 21:33:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:10.473 rmmod nvme_tcp 00:31:10.473 rmmod nvme_fabrics 00:31:10.473 rmmod nvme_keyring 00:31:10.473 21:33:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:10.473 21:33:59 -- nvmf/common.sh@124 -- # set -e 00:31:10.473 21:33:59 -- nvmf/common.sh@125 -- # return 0 00:31:10.473 21:33:59 -- nvmf/common.sh@478 -- # '[' -n 101243 ']' 00:31:10.473 21:33:59 -- nvmf/common.sh@479 -- # killprocess 101243 00:31:10.473 21:33:59 -- common/autotest_common.sh@936 -- # '[' -z 101243 ']' 00:31:10.473 21:33:59 -- common/autotest_common.sh@940 -- # kill -0 101243 00:31:10.473 21:33:59 -- common/autotest_common.sh@941 -- # uname 00:31:10.473 21:33:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:10.473 21:33:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101243 00:31:10.473 killing process with pid 101243 00:31:10.473 21:33:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:10.473 21:33:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:10.473 21:33:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101243' 00:31:10.473 21:33:59 -- common/autotest_common.sh@955 -- # kill 101243 00:31:10.473 21:33:59 -- common/autotest_common.sh@960 -- # wait 101243 00:31:10.754 21:33:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:10.754 21:33:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:10.754 21:33:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:10.754 21:33:59 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:10.754 21:33:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:10.754 21:33:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.754 21:33:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:10.754 21:33:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.754 21:33:59 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:10.754 00:31:10.754 real 0m10.772s 00:31:10.754 user 0m20.942s 00:31:10.754 sys 
0m1.435s 00:31:10.754 ************************************ 00:31:10.754 END TEST nvmf_discovery 00:31:10.754 ************************************ 00:31:10.754 21:33:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:10.754 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:31:10.754 21:33:59 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:10.754 21:33:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:10.754 21:33:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:10.754 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:31:10.754 ************************************ 00:31:10.754 START TEST nvmf_discovery_remove_ifc 00:31:10.754 ************************************ 00:31:10.754 21:33:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:11.032 * Looking for test storage... 00:31:11.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:11.032 21:34:00 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:11.032 21:34:00 -- nvmf/common.sh@7 -- # uname -s 00:31:11.032 21:34:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.032 21:34:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.032 21:34:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.032 21:34:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.032 21:34:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.032 21:34:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.032 21:34:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.032 21:34:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.032 21:34:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.032 21:34:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.032 21:34:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:31:11.032 21:34:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:31:11.032 21:34:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.032 21:34:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.032 21:34:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:11.032 21:34:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.032 21:34:00 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:11.032 21:34:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.032 21:34:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.032 21:34:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.032 21:34:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.032 21:34:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.032 21:34:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.032 21:34:00 -- paths/export.sh@5 -- # export PATH 00:31:11.032 21:34:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.032 21:34:00 -- nvmf/common.sh@47 -- # : 0 00:31:11.032 21:34:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:11.032 21:34:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:11.032 21:34:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.032 21:34:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.032 21:34:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.032 21:34:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:11.032 21:34:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:11.032 21:34:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:11.032 21:34:00 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:11.032 21:34:00 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:11.032 21:34:00 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:11.032 21:34:00 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:11.032 21:34:00 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:11.032 21:34:00 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:11.032 21:34:00 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:11.032 21:34:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:11.032 21:34:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.032 21:34:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:11.032 21:34:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:11.032 21:34:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:11.032 21:34:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.032 21:34:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:11.032 21:34:00 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.032 21:34:00 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:31:11.032 21:34:00 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:31:11.032 21:34:00 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:31:11.032 21:34:00 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:31:11.032 21:34:00 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:31:11.032 21:34:00 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:31:11.032 21:34:00 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:11.032 21:34:00 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.032 21:34:00 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:11.032 21:34:00 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:11.032 21:34:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:11.032 21:34:00 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:11.032 21:34:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:11.032 21:34:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:11.032 21:34:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:11.032 21:34:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:11.032 21:34:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:11.032 21:34:00 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:11.032 21:34:00 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:11.032 21:34:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:11.032 Cannot find device "nvmf_tgt_br" 00:31:11.032 21:34:00 -- nvmf/common.sh@155 -- # true 00:31:11.032 21:34:00 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:11.032 Cannot find device "nvmf_tgt_br2" 00:31:11.032 21:34:00 -- nvmf/common.sh@156 -- # true 00:31:11.032 21:34:00 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:11.032 21:34:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:11.032 Cannot find device "nvmf_tgt_br" 00:31:11.032 21:34:00 -- nvmf/common.sh@158 -- # true 00:31:11.032 21:34:00 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:11.032 Cannot find device "nvmf_tgt_br2" 00:31:11.032 21:34:00 -- nvmf/common.sh@159 -- # true 00:31:11.032 21:34:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:11.032 21:34:00 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:11.032 21:34:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:11.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:11.032 21:34:00 -- nvmf/common.sh@162 -- # true 00:31:11.032 21:34:00 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:11.032 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:11.032 21:34:00 -- nvmf/common.sh@163 -- # true 00:31:11.032 21:34:00 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:11.032 21:34:00 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:11.032 21:34:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:11.032 21:34:00 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:11.032 21:34:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:11.032 21:34:00 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:11.032 21:34:00 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:11.032 21:34:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:11.032 21:34:00 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:11.032 21:34:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:11.032 21:34:00 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:11.032 21:34:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:11.032 21:34:00 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:11.032 21:34:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:11.032 21:34:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:11.032 21:34:00 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:11.308 21:34:00 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:11.308 21:34:00 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:11.309 21:34:00 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:11.309 21:34:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:11.309 21:34:00 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:11.309 21:34:00 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:11.309 21:34:00 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:11.309 21:34:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:11.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:11.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:31:11.309 00:31:11.309 --- 10.0.0.2 ping statistics --- 00:31:11.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.309 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:31:11.309 21:34:00 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:11.309 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:11.309 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:31:11.309 00:31:11.309 --- 10.0.0.3 ping statistics --- 00:31:11.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.309 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:31:11.309 21:34:00 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:11.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:11.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:31:11.309 00:31:11.309 --- 10.0.0.1 ping statistics --- 00:31:11.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:11.309 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:31:11.309 21:34:00 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:11.309 21:34:00 -- nvmf/common.sh@422 -- # return 0 00:31:11.309 21:34:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:11.309 21:34:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:11.309 21:34:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:11.309 21:34:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:11.309 21:34:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:11.309 21:34:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:11.309 21:34:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:11.309 21:34:00 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:11.309 21:34:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:11.309 21:34:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:11.309 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:31:11.309 21:34:00 -- nvmf/common.sh@470 -- # nvmfpid=101786 00:31:11.309 21:34:00 -- nvmf/common.sh@471 -- # waitforlisten 101786 00:31:11.309 21:34:00 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:11.309 21:34:00 -- common/autotest_common.sh@817 -- # '[' -z 101786 ']' 00:31:11.309 21:34:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.309 21:34:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:11.309 21:34:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.309 21:34:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:11.309 21:34:00 -- common/autotest_common.sh@10 -- # set +x 00:31:11.309 [2024-04-26 21:34:00.404024] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:31:11.309 [2024-04-26 21:34:00.404125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.596 [2024-04-26 21:34:00.553447] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.596 [2024-04-26 21:34:00.606405] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.596 [2024-04-26 21:34:00.606460] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:11.596 [2024-04-26 21:34:00.606469] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.596 [2024-04-26 21:34:00.606474] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.596 [2024-04-26 21:34:00.606479] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
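The nvmf_veth_init and nvmfappstart traces above reduce to a small, fixed topology. The condensed sketch below is reconstructed from the commands visible in the trace (namespace, interface and binary paths are the ones the test framework uses); it omits the second target interface, the iptables rules and the error-tolerant teardown that the real helpers also perform.

# Target namespace with one veth pair bridged to the initiator side (sketch, reconstructed from the trace)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator end of the path
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br             # target end of the path
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                      # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up           # bridge the two host-side ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.2                                                  # initiator -> target, checked above
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator, checked above
# The target application then runs inside the namespace, listening on 10.0.0.2:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2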
00:31:11.596 [2024-04-26 21:34:00.606504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.170 21:34:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:12.170 21:34:01 -- common/autotest_common.sh@850 -- # return 0 00:31:12.170 21:34:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:12.170 21:34:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:12.170 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:31:12.170 21:34:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.170 21:34:01 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:12.170 21:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:12.170 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:31:12.170 [2024-04-26 21:34:01.370253] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:12.170 [2024-04-26 21:34:01.378384] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:12.170 null0 00:31:12.170 [2024-04-26 21:34:01.410246] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.436 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:12.436 21:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:12.436 21:34:01 -- host/discovery_remove_ifc.sh@59 -- # hostpid=101832 00:31:12.436 21:34:01 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 101832 /tmp/host.sock 00:31:12.436 21:34:01 -- common/autotest_common.sh@817 -- # '[' -z 101832 ']' 00:31:12.436 21:34:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:31:12.436 21:34:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:12.436 21:34:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:12.436 21:34:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:12.436 21:34:01 -- common/autotest_common.sh@10 -- # set +x 00:31:12.436 21:34:01 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:12.436 [2024-04-26 21:34:01.488596] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:31:12.436 [2024-04-26 21:34:01.489166] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101832 ] 00:31:12.436 [2024-04-26 21:34:01.622748] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.720 [2024-04-26 21:34:01.697066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.329 21:34:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:13.329 21:34:02 -- common/autotest_common.sh@850 -- # return 0 00:31:13.329 21:34:02 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:13.329 21:34:02 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:13.329 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.329 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:31:13.329 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.329 21:34:02 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:13.330 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.330 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:31:13.330 21:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:13.330 21:34:02 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:13.330 21:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:13.330 21:34:02 -- common/autotest_common.sh@10 -- # set +x 00:31:14.281 [2024-04-26 21:34:03.519106] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:14.281 [2024-04-26 21:34:03.519151] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:14.281 [2024-04-26 21:34:03.519170] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:14.540 [2024-04-26 21:34:03.605077] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:14.540 [2024-04-26 21:34:03.661567] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:14.540 [2024-04-26 21:34:03.661681] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:14.540 [2024-04-26 21:34:03.661710] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:14.540 [2024-04-26 21:34:03.661743] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:14.540 [2024-04-26 21:34:03.661787] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:14.540 21:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.540 [2024-04-26 21:34:03.667971] 
bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xea2ed0 was disconnected and freed. delete nvme_qpair. 00:31:14.540 21:34:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.540 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:14.540 21:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:14.540 21:34:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:14.540 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:14.540 21:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:14.540 21:34:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:15.913 21:34:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:15.913 21:34:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.913 21:34:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:15.913 21:34:04 -- common/autotest_common.sh@10 -- # set +x 00:31:15.913 21:34:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:15.913 21:34:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:15.913 21:34:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:15.913 21:34:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:15.913 21:34:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:15.913 21:34:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:16.850 21:34:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:16.850 21:34:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:16.850 21:34:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.850 21:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:16.850 21:34:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:16.850 21:34:05 -- common/autotest_common.sh@10 -- # set +x 00:31:16.850 21:34:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:16.850 21:34:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:16.850 21:34:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:16.850 21:34:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:17.786 21:34:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:17.786 21:34:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:17.786 21:34:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:17.786 21:34:06 -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:31:17.786 21:34:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:17.786 21:34:06 -- common/autotest_common.sh@10 -- # set +x 00:31:17.786 21:34:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:17.786 21:34:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:17.786 21:34:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:17.786 21:34:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:18.724 21:34:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:18.724 21:34:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.724 21:34:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:18.724 21:34:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:18.724 21:34:07 -- common/autotest_common.sh@10 -- # set +x 00:31:18.724 21:34:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:18.724 21:34:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:18.983 21:34:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:18.983 21:34:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:18.983 21:34:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:19.914 21:34:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:19.915 21:34:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.915 21:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.915 21:34:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:19.915 21:34:09 -- common/autotest_common.sh@10 -- # set +x 00:31:19.915 21:34:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:19.915 21:34:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:19.915 21:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.915 21:34:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:19.915 21:34:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:19.915 [2024-04-26 21:34:09.078413] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:19.915 [2024-04-26 21:34:09.078504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.915 [2024-04-26 21:34:09.078517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.915 [2024-04-26 21:34:09.078528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.915 [2024-04-26 21:34:09.078535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.915 [2024-04-26 21:34:09.078543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.915 [2024-04-26 21:34:09.078550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.915 [2024-04-26 21:34:09.078558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.915 [2024-04-26 21:34:09.078564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:31:19.915 [2024-04-26 21:34:09.078571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.915 [2024-04-26 21:34:09.078578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.915 [2024-04-26 21:34:09.078584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe69460 is same with the state(5) to be set 00:31:19.915 [2024-04-26 21:34:09.088385] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe69460 (9): Bad file descriptor 00:31:19.915 [2024-04-26 21:34:09.098396] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:20.845 21:34:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:20.845 21:34:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:20.845 21:34:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:20.845 21:34:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:20.845 21:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.845 21:34:10 -- common/autotest_common.sh@10 -- # set +x 00:31:20.845 21:34:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:21.103 [2024-04-26 21:34:10.108399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:22.036 [2024-04-26 21:34:11.132420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:22.036 [2024-04-26 21:34:11.132578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe69460 with addr=10.0.0.2, port=4420 00:31:22.036 [2024-04-26 21:34:11.132620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe69460 is same with the state(5) to be set 00:31:22.036 [2024-04-26 21:34:11.133911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe69460 (9): Bad file descriptor 00:31:22.036 [2024-04-26 21:34:11.134005] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
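The block of get_bdev_list/sleep lines repeated above is the test polling the host application for its bdev list once a second. A minimal reconstruction of the two helpers involved is shown below (the real definitions live in test/nvmf/host/discovery_remove_ifc.sh and the common test scripts, where rpc_cmd wraps scripts/rpc.py; this sketch keeps only the behaviour visible in the trace and adds no timeout handling).

get_bdev_list() {
    # All bdev names currently exposed on the host RPC socket, as one sorted, space-separated line
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once a second until the bdev list matches the expected value,
    # e.g. wait_for_bdev nvme0n1 or wait_for_bdev '' for "no bdevs left"
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}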
00:31:22.036 [2024-04-26 21:34:11.134077] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:22.036 [2024-04-26 21:34:11.134173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.036 [2024-04-26 21:34:11.134215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.036 [2024-04-26 21:34:11.134246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.036 [2024-04-26 21:34:11.134279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.036 [2024-04-26 21:34:11.134304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.036 [2024-04-26 21:34:11.134372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.036 [2024-04-26 21:34:11.134423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.036 [2024-04-26 21:34:11.134468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.036 [2024-04-26 21:34:11.134516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:22.036 [2024-04-26 21:34:11.134565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:22.036 [2024-04-26 21:34:11.134599] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
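The reset failure and the "in failed state" message above are consistent with how the discovery controller was set up earlier in this test: the trace shows it being started with a two-second controller-loss timeout and a one-second reconnect delay, so once the target-side interface disappears the host only retries briefly before giving the controller up. The command, as it appears in the trace (rpc_cmd being the test framework's RPC wrapper):

rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach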
00:31:22.036 [2024-04-26 21:34:11.134638] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe69870 (9): Bad file descriptor 00:31:22.036 [2024-04-26 21:34:11.135035] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:22.036 [2024-04-26 21:34:11.135079] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:22.036 21:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:22.036 21:34:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:22.036 21:34:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:22.972 21:34:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:22.972 21:34:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:22.972 21:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:22.972 21:34:12 -- common/autotest_common.sh@10 -- # set +x 00:31:22.972 21:34:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:22.972 21:34:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:22.972 21:34:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:22.972 21:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:22.972 21:34:12 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:22.972 21:34:12 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:22.972 21:34:12 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:22.972 21:34:12 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:23.230 21:34:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:23.230 21:34:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:23.230 21:34:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:23.230 21:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:23.230 21:34:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:23.230 21:34:12 -- common/autotest_common.sh@10 -- # set +x 00:31:23.230 21:34:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:23.230 21:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:23.230 21:34:12 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:23.230 21:34:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:24.178 [2024-04-26 21:34:13.142207] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:24.178 [2024-04-26 21:34:13.142255] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:24.178 [2024-04-26 21:34:13.142275] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:24.178 [2024-04-26 21:34:13.229160] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:24.178 21:34:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:24.178 21:34:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:24.178 21:34:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:24.178 21:34:13 -- common/autotest_common.sh@10 -- # set +x 00:31:24.178 21:34:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:24.178 21:34:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 
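Putting the surrounding trace together, the interface bounce this test exercises is four netns commands wrapped by the wait_for_bdev polling shown earlier; the outline below collects them from the host/discovery_remove_ifc.sh@75 through @86 lines above purely for readability.

# Take the target address away and wait for the attached bdev to vanish
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
wait_for_bdev ''            # nvme0n1 disappears once the controller is declared lost

# Restore the address, let discovery reattach, and wait for the new bdev
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1       # a fresh controller attaches and exposes nvme1n1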
00:31:24.178 21:34:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:24.178 [2024-04-26 21:34:13.284170] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:24.178 [2024-04-26 21:34:13.284236] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:24.178 [2024-04-26 21:34:13.284256] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:24.178 [2024-04-26 21:34:13.284275] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:24.178 [2024-04-26 21:34:13.284284] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:24.178 [2024-04-26 21:34:13.290774] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe59920 was disconnected and freed. delete nvme_qpair. 00:31:24.178 21:34:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:24.178 21:34:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:24.178 21:34:13 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:24.178 21:34:13 -- host/discovery_remove_ifc.sh@90 -- # killprocess 101832 00:31:24.178 21:34:13 -- common/autotest_common.sh@936 -- # '[' -z 101832 ']' 00:31:24.178 21:34:13 -- common/autotest_common.sh@940 -- # kill -0 101832 00:31:24.178 21:34:13 -- common/autotest_common.sh@941 -- # uname 00:31:24.178 21:34:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:24.178 21:34:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101832 00:31:24.178 killing process with pid 101832 00:31:24.178 21:34:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:24.178 21:34:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:24.178 21:34:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101832' 00:31:24.178 21:34:13 -- common/autotest_common.sh@955 -- # kill 101832 00:31:24.178 21:34:13 -- common/autotest_common.sh@960 -- # wait 101832 00:31:24.437 21:34:13 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:24.437 21:34:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:24.437 21:34:13 -- nvmf/common.sh@117 -- # sync 00:31:24.752 21:34:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:24.752 21:34:13 -- nvmf/common.sh@120 -- # set +e 00:31:24.752 21:34:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:24.752 21:34:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:24.752 rmmod nvme_tcp 00:31:24.752 rmmod nvme_fabrics 00:31:24.752 rmmod nvme_keyring 00:31:24.752 21:34:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:24.752 21:34:13 -- nvmf/common.sh@124 -- # set -e 00:31:24.752 21:34:13 -- nvmf/common.sh@125 -- # return 0 00:31:24.752 21:34:13 -- nvmf/common.sh@478 -- # '[' -n 101786 ']' 00:31:24.752 21:34:13 -- nvmf/common.sh@479 -- # killprocess 101786 00:31:24.752 21:34:13 -- common/autotest_common.sh@936 -- # '[' -z 101786 ']' 00:31:24.752 21:34:13 -- common/autotest_common.sh@940 -- # kill -0 101786 00:31:24.752 21:34:13 -- common/autotest_common.sh@941 -- # uname 00:31:24.752 21:34:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:24.752 21:34:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101786 00:31:24.752 killing process with pid 101786 00:31:24.752 21:34:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:24.752 21:34:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:31:24.752 21:34:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101786' 00:31:24.752 21:34:13 -- common/autotest_common.sh@955 -- # kill 101786 00:31:24.752 21:34:13 -- common/autotest_common.sh@960 -- # wait 101786 00:31:25.010 21:34:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:25.010 21:34:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:25.010 21:34:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:25.010 21:34:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:25.010 21:34:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:25.010 21:34:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.010 21:34:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.010 21:34:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.010 21:34:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:25.010 00:31:25.010 real 0m14.082s 00:31:25.010 user 0m24.221s 00:31:25.010 sys 0m1.387s 00:31:25.010 21:34:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:25.010 21:34:14 -- common/autotest_common.sh@10 -- # set +x 00:31:25.010 ************************************ 00:31:25.010 END TEST nvmf_discovery_remove_ifc 00:31:25.010 ************************************ 00:31:25.010 21:34:14 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:25.010 21:34:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:25.010 21:34:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:25.010 21:34:14 -- common/autotest_common.sh@10 -- # set +x 00:31:25.010 ************************************ 00:31:25.010 START TEST nvmf_identify_kernel_target 00:31:25.010 ************************************ 00:31:25.010 21:34:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:25.010 * Looking for test storage... 
00:31:25.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:25.011 21:34:14 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:25.011 21:34:14 -- nvmf/common.sh@7 -- # uname -s 00:31:25.011 21:34:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.011 21:34:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.011 21:34:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:25.011 21:34:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.011 21:34:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.011 21:34:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.011 21:34:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.011 21:34:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.011 21:34:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.011 21:34:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.270 21:34:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:31:25.270 21:34:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:31:25.270 21:34:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.270 21:34:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.270 21:34:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:25.270 21:34:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.270 21:34:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:25.270 21:34:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.270 21:34:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.270 21:34:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.270 21:34:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.270 21:34:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.270 21:34:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.270 21:34:14 -- paths/export.sh@5 -- # export PATH 00:31:25.270 21:34:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.270 21:34:14 -- nvmf/common.sh@47 -- # : 0 00:31:25.270 21:34:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:25.270 21:34:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:25.270 21:34:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.270 21:34:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.270 21:34:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.270 21:34:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:25.270 21:34:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:25.270 21:34:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:25.270 21:34:14 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:25.270 21:34:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:25.270 21:34:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.270 21:34:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:25.270 21:34:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:25.270 21:34:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:25.270 21:34:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.270 21:34:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:25.270 21:34:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.270 21:34:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:31:25.270 21:34:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:31:25.270 21:34:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:31:25.270 21:34:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:31:25.270 21:34:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:31:25.270 21:34:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:31:25.270 21:34:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.270 21:34:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.270 21:34:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:25.270 21:34:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:25.270 21:34:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:25.270 21:34:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:25.270 21:34:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:25.271 21:34:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:25.271 21:34:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:25.271 21:34:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:25.271 21:34:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:25.271 21:34:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:25.271 21:34:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:25.271 21:34:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:25.271 Cannot find device "nvmf_tgt_br" 00:31:25.271 21:34:14 -- nvmf/common.sh@155 -- # true 00:31:25.271 21:34:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:25.271 Cannot find device "nvmf_tgt_br2" 00:31:25.271 21:34:14 -- nvmf/common.sh@156 -- # true 00:31:25.271 21:34:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:25.271 21:34:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:25.271 Cannot find device "nvmf_tgt_br" 00:31:25.271 21:34:14 -- nvmf/common.sh@158 -- # true 00:31:25.271 21:34:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:25.271 Cannot find device "nvmf_tgt_br2" 00:31:25.271 21:34:14 -- nvmf/common.sh@159 -- # true 00:31:25.271 21:34:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:25.271 21:34:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:25.271 21:34:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:25.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:25.271 21:34:14 -- nvmf/common.sh@162 -- # true 00:31:25.271 21:34:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:25.271 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:25.271 21:34:14 -- nvmf/common.sh@163 -- # true 00:31:25.271 21:34:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:25.271 21:34:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:25.271 21:34:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:25.271 21:34:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:25.271 21:34:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:25.271 21:34:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:25.271 21:34:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:25.271 21:34:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:25.271 21:34:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:25.271 21:34:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:25.271 21:34:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:25.271 21:34:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:25.271 21:34:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:25.271 21:34:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:25.532 21:34:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:25.532 21:34:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:25.532 21:34:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:25.532 21:34:14 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:25.532 21:34:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:25.532 21:34:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:25.532 21:34:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:25.532 21:34:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:25.532 21:34:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:25.532 21:34:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:25.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:31:25.532 00:31:25.532 --- 10.0.0.2 ping statistics --- 00:31:25.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.532 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:31:25.532 21:34:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:25.532 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:25.532 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.144 ms 00:31:25.532 00:31:25.532 --- 10.0.0.3 ping statistics --- 00:31:25.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.532 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:31:25.532 21:34:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:25.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:25.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:31:25.532 00:31:25.532 --- 10.0.0.1 ping statistics --- 00:31:25.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.532 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:31:25.532 21:34:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.532 21:34:14 -- nvmf/common.sh@422 -- # return 0 00:31:25.532 21:34:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:25.532 21:34:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.532 21:34:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:25.532 21:34:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:25.532 21:34:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.532 21:34:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:25.532 21:34:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:25.532 21:34:14 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:25.532 21:34:14 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:25.532 21:34:14 -- nvmf/common.sh@717 -- # local ip 00:31:25.532 21:34:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:25.532 21:34:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:25.532 21:34:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.532 21:34:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.532 21:34:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:25.532 21:34:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:25.532 21:34:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:25.532 21:34:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:25.532 21:34:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:25.532 21:34:14 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:25.532 21:34:14 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:25.532 21:34:14 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:25.532 21:34:14 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:31:25.532 21:34:14 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:25.532 21:34:14 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:25.532 21:34:14 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:25.532 21:34:14 -- nvmf/common.sh@628 -- # local block nvme 00:31:25.532 21:34:14 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:31:25.532 21:34:14 -- nvmf/common.sh@631 -- # modprobe nvmet 00:31:25.532 21:34:14 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:25.532 21:34:14 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:25.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:26.051 Waiting for block devices as requested 00:31:26.051 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:26.051 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:26.051 21:34:15 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:26.051 21:34:15 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:26.051 21:34:15 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:31:26.051 21:34:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:26.051 21:34:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:26.051 21:34:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:26.051 21:34:15 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:31:26.051 21:34:15 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:26.051 21:34:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:31:26.315 No valid GPT data, bailing 00:31:26.315 21:34:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:26.315 21:34:15 -- scripts/common.sh@391 -- # pt= 00:31:26.315 21:34:15 -- scripts/common.sh@392 -- # return 1 00:31:26.315 21:34:15 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:31:26.315 21:34:15 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:26.315 21:34:15 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:31:26.315 21:34:15 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:31:26.315 21:34:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:31:26.315 21:34:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:31:26.315 21:34:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:26.315 21:34:15 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:31:26.315 21:34:15 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:31:26.315 21:34:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:31:26.315 No valid GPT data, bailing 00:31:26.315 21:34:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:31:26.315 21:34:15 -- scripts/common.sh@391 -- # pt= 00:31:26.315 21:34:15 -- scripts/common.sh@392 -- # return 1 00:31:26.315 21:34:15 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:31:26.315 21:34:15 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:26.315 21:34:15 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:31:26.315 21:34:15 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:31:26.315 21:34:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:31:26.315 21:34:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:31:26.315 21:34:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:26.315 21:34:15 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:31:26.315 21:34:15 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:31:26.315 21:34:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:31:26.315 No valid GPT data, bailing 00:31:26.315 21:34:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:31:26.315 21:34:15 -- scripts/common.sh@391 -- # pt= 00:31:26.315 21:34:15 -- scripts/common.sh@392 -- # return 1 00:31:26.315 21:34:15 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:31:26.315 21:34:15 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:26.315 21:34:15 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:26.315 21:34:15 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:31:26.315 21:34:15 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:31:26.315 21:34:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:26.315 21:34:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:26.315 21:34:15 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:31:26.315 21:34:15 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:31:26.315 21:34:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:31:26.315 No valid GPT data, bailing 00:31:26.315 21:34:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:26.592 21:34:15 -- scripts/common.sh@391 -- # pt= 00:31:26.592 21:34:15 -- scripts/common.sh@392 -- # return 1 00:31:26.592 21:34:15 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:31:26.592 21:34:15 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:31:26.592 21:34:15 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:26.592 21:34:15 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:26.592 21:34:15 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:26.592 21:34:15 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:26.592 21:34:15 -- nvmf/common.sh@656 -- # echo 1 00:31:26.592 21:34:15 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:31:26.592 21:34:15 -- nvmf/common.sh@658 -- # echo 1 00:31:26.592 21:34:15 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:31:26.592 21:34:15 -- nvmf/common.sh@661 -- # echo tcp 00:31:26.592 21:34:15 -- nvmf/common.sh@662 -- # echo 4420 00:31:26.592 21:34:15 -- nvmf/common.sh@663 -- # echo ipv4 00:31:26.592 21:34:15 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:26.592 21:34:15 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -a 10.0.0.1 -t tcp -s 4420 00:31:26.592 00:31:26.592 Discovery Log Number of Records 2, Generation counter 2 00:31:26.592 =====Discovery Log Entry 0====== 00:31:26.592 trtype: tcp 00:31:26.592 adrfam: ipv4 00:31:26.592 subtype: current discovery subsystem 00:31:26.592 treq: not specified, sq flow control disable supported 00:31:26.592 portid: 1 00:31:26.592 trsvcid: 4420 00:31:26.592 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:26.592 traddr: 10.0.0.1 00:31:26.592 eflags: none 00:31:26.592 sectype: none 00:31:26.592 =====Discovery Log Entry 1====== 00:31:26.592 trtype: tcp 00:31:26.592 adrfam: ipv4 00:31:26.592 subtype: nvme subsystem 00:31:26.592 treq: not specified, sq flow control disable supported 00:31:26.592 portid: 1 00:31:26.592 trsvcid: 4420 00:31:26.592 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:26.592 traddr: 10.0.0.1 00:31:26.592 eflags: none 00:31:26.592 sectype: none 00:31:26.592 21:34:15 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:26.592 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:26.592 ===================================================== 00:31:26.592 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:26.592 ===================================================== 00:31:26.592 Controller Capabilities/Features 00:31:26.592 ================================ 00:31:26.592 Vendor ID: 0000 00:31:26.592 Subsystem Vendor ID: 0000 00:31:26.592 Serial Number: 1b6091c2b6e10c5b6b4e 00:31:26.592 Model Number: Linux 00:31:26.592 Firmware Version: 6.7.0-68 00:31:26.592 Recommended Arb Burst: 0 00:31:26.592 IEEE OUI Identifier: 00 00 00 00:31:26.592 Multi-path I/O 00:31:26.592 May have multiple subsystem ports: No 00:31:26.592 May have multiple controllers: No 00:31:26.592 Associated with SR-IOV VF: No 00:31:26.592 Max Data Transfer Size: Unlimited 00:31:26.592 Max Number of Namespaces: 0 00:31:26.592 Max Number of I/O Queues: 1024 00:31:26.592 NVMe Specification Version (VS): 1.3 00:31:26.592 NVMe Specification Version (Identify): 1.3 00:31:26.592 Maximum Queue Entries: 1024 00:31:26.592 Contiguous Queues Required: No 00:31:26.592 Arbitration Mechanisms Supported 00:31:26.592 Weighted Round Robin: Not Supported 00:31:26.592 Vendor Specific: Not Supported 00:31:26.592 Reset Timeout: 7500 ms 00:31:26.592 Doorbell Stride: 4 bytes 00:31:26.592 NVM Subsystem Reset: Not Supported 00:31:26.592 Command Sets Supported 00:31:26.592 NVM Command Set: Supported 00:31:26.592 Boot Partition: Not Supported 00:31:26.592 Memory Page Size Minimum: 4096 bytes 00:31:26.592 Memory Page Size Maximum: 4096 bytes 00:31:26.592 Persistent Memory Region: Not Supported 00:31:26.592 Optional Asynchronous Events Supported 00:31:26.592 Namespace Attribute Notices: Not Supported 00:31:26.592 Firmware Activation Notices: Not Supported 00:31:26.592 ANA Change Notices: Not Supported 00:31:26.592 PLE Aggregate Log Change Notices: Not Supported 00:31:26.592 LBA Status Info Alert Notices: Not Supported 00:31:26.592 EGE Aggregate Log Change Notices: Not Supported 00:31:26.592 Normal NVM Subsystem Shutdown event: Not Supported 00:31:26.592 Zone Descriptor Change Notices: Not Supported 00:31:26.592 Discovery Log Change Notices: Supported 00:31:26.592 Controller Attributes 00:31:26.592 128-bit Host Identifier: Not Supported 00:31:26.592 Non-Operational Permissive Mode: Not Supported 00:31:26.592 NVM Sets: Not Supported 00:31:26.592 Read Recovery Levels: Not Supported 00:31:26.592 Endurance Groups: Not Supported 00:31:26.592 Predictable Latency Mode: Not Supported 00:31:26.592 Traffic Based Keep ALive: Not Supported 00:31:26.592 Namespace Granularity: Not Supported 00:31:26.592 SQ Associations: Not Supported 00:31:26.592 UUID List: Not Supported 00:31:26.592 Multi-Domain Subsystem: Not Supported 00:31:26.592 Fixed Capacity Management: Not Supported 
00:31:26.592 Variable Capacity Management: Not Supported 00:31:26.592 Delete Endurance Group: Not Supported 00:31:26.592 Delete NVM Set: Not Supported 00:31:26.592 Extended LBA Formats Supported: Not Supported 00:31:26.592 Flexible Data Placement Supported: Not Supported 00:31:26.592 00:31:26.592 Controller Memory Buffer Support 00:31:26.592 ================================ 00:31:26.592 Supported: No 00:31:26.592 00:31:26.592 Persistent Memory Region Support 00:31:26.592 ================================ 00:31:26.592 Supported: No 00:31:26.592 00:31:26.592 Admin Command Set Attributes 00:31:26.592 ============================ 00:31:26.592 Security Send/Receive: Not Supported 00:31:26.592 Format NVM: Not Supported 00:31:26.593 Firmware Activate/Download: Not Supported 00:31:26.593 Namespace Management: Not Supported 00:31:26.593 Device Self-Test: Not Supported 00:31:26.593 Directives: Not Supported 00:31:26.593 NVMe-MI: Not Supported 00:31:26.593 Virtualization Management: Not Supported 00:31:26.593 Doorbell Buffer Config: Not Supported 00:31:26.593 Get LBA Status Capability: Not Supported 00:31:26.593 Command & Feature Lockdown Capability: Not Supported 00:31:26.593 Abort Command Limit: 1 00:31:26.593 Async Event Request Limit: 1 00:31:26.593 Number of Firmware Slots: N/A 00:31:26.593 Firmware Slot 1 Read-Only: N/A 00:31:26.593 Firmware Activation Without Reset: N/A 00:31:26.593 Multiple Update Detection Support: N/A 00:31:26.593 Firmware Update Granularity: No Information Provided 00:31:26.593 Per-Namespace SMART Log: No 00:31:26.593 Asymmetric Namespace Access Log Page: Not Supported 00:31:26.593 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:26.593 Command Effects Log Page: Not Supported 00:31:26.593 Get Log Page Extended Data: Supported 00:31:26.593 Telemetry Log Pages: Not Supported 00:31:26.593 Persistent Event Log Pages: Not Supported 00:31:26.593 Supported Log Pages Log Page: May Support 00:31:26.593 Commands Supported & Effects Log Page: Not Supported 00:31:26.593 Feature Identifiers & Effects Log Page:May Support 00:31:26.593 NVMe-MI Commands & Effects Log Page: May Support 00:31:26.593 Data Area 4 for Telemetry Log: Not Supported 00:31:26.593 Error Log Page Entries Supported: 1 00:31:26.593 Keep Alive: Not Supported 00:31:26.593 00:31:26.593 NVM Command Set Attributes 00:31:26.593 ========================== 00:31:26.593 Submission Queue Entry Size 00:31:26.593 Max: 1 00:31:26.593 Min: 1 00:31:26.593 Completion Queue Entry Size 00:31:26.593 Max: 1 00:31:26.593 Min: 1 00:31:26.593 Number of Namespaces: 0 00:31:26.593 Compare Command: Not Supported 00:31:26.593 Write Uncorrectable Command: Not Supported 00:31:26.593 Dataset Management Command: Not Supported 00:31:26.593 Write Zeroes Command: Not Supported 00:31:26.593 Set Features Save Field: Not Supported 00:31:26.593 Reservations: Not Supported 00:31:26.593 Timestamp: Not Supported 00:31:26.593 Copy: Not Supported 00:31:26.593 Volatile Write Cache: Not Present 00:31:26.593 Atomic Write Unit (Normal): 1 00:31:26.593 Atomic Write Unit (PFail): 1 00:31:26.593 Atomic Compare & Write Unit: 1 00:31:26.593 Fused Compare & Write: Not Supported 00:31:26.593 Scatter-Gather List 00:31:26.593 SGL Command Set: Supported 00:31:26.593 SGL Keyed: Not Supported 00:31:26.593 SGL Bit Bucket Descriptor: Not Supported 00:31:26.593 SGL Metadata Pointer: Not Supported 00:31:26.593 Oversized SGL: Not Supported 00:31:26.593 SGL Metadata Address: Not Supported 00:31:26.593 SGL Offset: Supported 00:31:26.593 Transport SGL Data Block: Not 
Supported 00:31:26.593 Replay Protected Memory Block: Not Supported 00:31:26.593 00:31:26.593 Firmware Slot Information 00:31:26.593 ========================= 00:31:26.593 Active slot: 0 00:31:26.593 00:31:26.593 00:31:26.593 Error Log 00:31:26.593 ========= 00:31:26.593 00:31:26.593 Active Namespaces 00:31:26.593 ================= 00:31:26.593 Discovery Log Page 00:31:26.593 ================== 00:31:26.593 Generation Counter: 2 00:31:26.593 Number of Records: 2 00:31:26.593 Record Format: 0 00:31:26.593 00:31:26.593 Discovery Log Entry 0 00:31:26.593 ---------------------- 00:31:26.593 Transport Type: 3 (TCP) 00:31:26.593 Address Family: 1 (IPv4) 00:31:26.593 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:26.593 Entry Flags: 00:31:26.593 Duplicate Returned Information: 0 00:31:26.593 Explicit Persistent Connection Support for Discovery: 0 00:31:26.593 Transport Requirements: 00:31:26.593 Secure Channel: Not Specified 00:31:26.593 Port ID: 1 (0x0001) 00:31:26.593 Controller ID: 65535 (0xffff) 00:31:26.593 Admin Max SQ Size: 32 00:31:26.593 Transport Service Identifier: 4420 00:31:26.593 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:26.593 Transport Address: 10.0.0.1 00:31:26.593 Discovery Log Entry 1 00:31:26.593 ---------------------- 00:31:26.593 Transport Type: 3 (TCP) 00:31:26.593 Address Family: 1 (IPv4) 00:31:26.593 Subsystem Type: 2 (NVM Subsystem) 00:31:26.593 Entry Flags: 00:31:26.593 Duplicate Returned Information: 0 00:31:26.593 Explicit Persistent Connection Support for Discovery: 0 00:31:26.593 Transport Requirements: 00:31:26.593 Secure Channel: Not Specified 00:31:26.593 Port ID: 1 (0x0001) 00:31:26.593 Controller ID: 65535 (0xffff) 00:31:26.593 Admin Max SQ Size: 32 00:31:26.593 Transport Service Identifier: 4420 00:31:26.593 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:26.593 Transport Address: 10.0.0.1 00:31:26.593 21:34:15 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:26.875 get_feature(0x01) failed 00:31:26.875 get_feature(0x02) failed 00:31:26.875 get_feature(0x04) failed 00:31:26.875 ===================================================== 00:31:26.875 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:26.875 ===================================================== 00:31:26.875 Controller Capabilities/Features 00:31:26.875 ================================ 00:31:26.875 Vendor ID: 0000 00:31:26.875 Subsystem Vendor ID: 0000 00:31:26.875 Serial Number: bcc3584abec5bd2ac1de 00:31:26.875 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:26.875 Firmware Version: 6.7.0-68 00:31:26.875 Recommended Arb Burst: 6 00:31:26.875 IEEE OUI Identifier: 00 00 00 00:31:26.875 Multi-path I/O 00:31:26.875 May have multiple subsystem ports: Yes 00:31:26.875 May have multiple controllers: Yes 00:31:26.875 Associated with SR-IOV VF: No 00:31:26.875 Max Data Transfer Size: Unlimited 00:31:26.875 Max Number of Namespaces: 1024 00:31:26.875 Max Number of I/O Queues: 128 00:31:26.875 NVMe Specification Version (VS): 1.3 00:31:26.875 NVMe Specification Version (Identify): 1.3 00:31:26.875 Maximum Queue Entries: 1024 00:31:26.875 Contiguous Queues Required: No 00:31:26.875 Arbitration Mechanisms Supported 00:31:26.875 Weighted Round Robin: Not Supported 00:31:26.875 Vendor Specific: Not Supported 00:31:26.875 Reset Timeout: 7500 ms 00:31:26.875 Doorbell Stride: 4 bytes 
00:31:26.875 NVM Subsystem Reset: Not Supported 00:31:26.875 Command Sets Supported 00:31:26.875 NVM Command Set: Supported 00:31:26.875 Boot Partition: Not Supported 00:31:26.875 Memory Page Size Minimum: 4096 bytes 00:31:26.875 Memory Page Size Maximum: 4096 bytes 00:31:26.875 Persistent Memory Region: Not Supported 00:31:26.875 Optional Asynchronous Events Supported 00:31:26.875 Namespace Attribute Notices: Supported 00:31:26.875 Firmware Activation Notices: Not Supported 00:31:26.875 ANA Change Notices: Supported 00:31:26.875 PLE Aggregate Log Change Notices: Not Supported 00:31:26.875 LBA Status Info Alert Notices: Not Supported 00:31:26.875 EGE Aggregate Log Change Notices: Not Supported 00:31:26.875 Normal NVM Subsystem Shutdown event: Not Supported 00:31:26.875 Zone Descriptor Change Notices: Not Supported 00:31:26.875 Discovery Log Change Notices: Not Supported 00:31:26.875 Controller Attributes 00:31:26.875 128-bit Host Identifier: Supported 00:31:26.875 Non-Operational Permissive Mode: Not Supported 00:31:26.875 NVM Sets: Not Supported 00:31:26.875 Read Recovery Levels: Not Supported 00:31:26.875 Endurance Groups: Not Supported 00:31:26.875 Predictable Latency Mode: Not Supported 00:31:26.875 Traffic Based Keep ALive: Supported 00:31:26.875 Namespace Granularity: Not Supported 00:31:26.875 SQ Associations: Not Supported 00:31:26.875 UUID List: Not Supported 00:31:26.875 Multi-Domain Subsystem: Not Supported 00:31:26.875 Fixed Capacity Management: Not Supported 00:31:26.875 Variable Capacity Management: Not Supported 00:31:26.875 Delete Endurance Group: Not Supported 00:31:26.875 Delete NVM Set: Not Supported 00:31:26.875 Extended LBA Formats Supported: Not Supported 00:31:26.875 Flexible Data Placement Supported: Not Supported 00:31:26.875 00:31:26.875 Controller Memory Buffer Support 00:31:26.875 ================================ 00:31:26.875 Supported: No 00:31:26.875 00:31:26.875 Persistent Memory Region Support 00:31:26.875 ================================ 00:31:26.875 Supported: No 00:31:26.875 00:31:26.875 Admin Command Set Attributes 00:31:26.875 ============================ 00:31:26.875 Security Send/Receive: Not Supported 00:31:26.875 Format NVM: Not Supported 00:31:26.875 Firmware Activate/Download: Not Supported 00:31:26.875 Namespace Management: Not Supported 00:31:26.875 Device Self-Test: Not Supported 00:31:26.875 Directives: Not Supported 00:31:26.875 NVMe-MI: Not Supported 00:31:26.875 Virtualization Management: Not Supported 00:31:26.875 Doorbell Buffer Config: Not Supported 00:31:26.875 Get LBA Status Capability: Not Supported 00:31:26.875 Command & Feature Lockdown Capability: Not Supported 00:31:26.875 Abort Command Limit: 4 00:31:26.875 Async Event Request Limit: 4 00:31:26.875 Number of Firmware Slots: N/A 00:31:26.875 Firmware Slot 1 Read-Only: N/A 00:31:26.875 Firmware Activation Without Reset: N/A 00:31:26.875 Multiple Update Detection Support: N/A 00:31:26.875 Firmware Update Granularity: No Information Provided 00:31:26.875 Per-Namespace SMART Log: Yes 00:31:26.875 Asymmetric Namespace Access Log Page: Supported 00:31:26.875 ANA Transition Time : 10 sec 00:31:26.875 00:31:26.875 Asymmetric Namespace Access Capabilities 00:31:26.875 ANA Optimized State : Supported 00:31:26.875 ANA Non-Optimized State : Supported 00:31:26.875 ANA Inaccessible State : Supported 00:31:26.876 ANA Persistent Loss State : Supported 00:31:26.876 ANA Change State : Supported 00:31:26.876 ANAGRPID is not changed : No 00:31:26.876 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:31:26.876 00:31:26.876 ANA Group Identifier Maximum : 128 00:31:26.876 Number of ANA Group Identifiers : 128 00:31:26.876 Max Number of Allowed Namespaces : 1024 00:31:26.876 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:26.876 Command Effects Log Page: Supported 00:31:26.876 Get Log Page Extended Data: Supported 00:31:26.876 Telemetry Log Pages: Not Supported 00:31:26.876 Persistent Event Log Pages: Not Supported 00:31:26.876 Supported Log Pages Log Page: May Support 00:31:26.876 Commands Supported & Effects Log Page: Not Supported 00:31:26.876 Feature Identifiers & Effects Log Page:May Support 00:31:26.876 NVMe-MI Commands & Effects Log Page: May Support 00:31:26.876 Data Area 4 for Telemetry Log: Not Supported 00:31:26.876 Error Log Page Entries Supported: 128 00:31:26.876 Keep Alive: Supported 00:31:26.876 Keep Alive Granularity: 1000 ms 00:31:26.876 00:31:26.876 NVM Command Set Attributes 00:31:26.876 ========================== 00:31:26.876 Submission Queue Entry Size 00:31:26.876 Max: 64 00:31:26.876 Min: 64 00:31:26.876 Completion Queue Entry Size 00:31:26.876 Max: 16 00:31:26.876 Min: 16 00:31:26.876 Number of Namespaces: 1024 00:31:26.876 Compare Command: Not Supported 00:31:26.876 Write Uncorrectable Command: Not Supported 00:31:26.876 Dataset Management Command: Supported 00:31:26.876 Write Zeroes Command: Supported 00:31:26.876 Set Features Save Field: Not Supported 00:31:26.876 Reservations: Not Supported 00:31:26.876 Timestamp: Not Supported 00:31:26.876 Copy: Not Supported 00:31:26.876 Volatile Write Cache: Present 00:31:26.876 Atomic Write Unit (Normal): 1 00:31:26.876 Atomic Write Unit (PFail): 1 00:31:26.876 Atomic Compare & Write Unit: 1 00:31:26.876 Fused Compare & Write: Not Supported 00:31:26.876 Scatter-Gather List 00:31:26.876 SGL Command Set: Supported 00:31:26.876 SGL Keyed: Not Supported 00:31:26.876 SGL Bit Bucket Descriptor: Not Supported 00:31:26.876 SGL Metadata Pointer: Not Supported 00:31:26.876 Oversized SGL: Not Supported 00:31:26.876 SGL Metadata Address: Not Supported 00:31:26.876 SGL Offset: Supported 00:31:26.876 Transport SGL Data Block: Not Supported 00:31:26.876 Replay Protected Memory Block: Not Supported 00:31:26.876 00:31:26.876 Firmware Slot Information 00:31:26.876 ========================= 00:31:26.876 Active slot: 0 00:31:26.876 00:31:26.876 Asymmetric Namespace Access 00:31:26.876 =========================== 00:31:26.876 Change Count : 0 00:31:26.876 Number of ANA Group Descriptors : 1 00:31:26.876 ANA Group Descriptor : 0 00:31:26.876 ANA Group ID : 1 00:31:26.876 Number of NSID Values : 1 00:31:26.876 Change Count : 0 00:31:26.876 ANA State : 1 00:31:26.876 Namespace Identifier : 1 00:31:26.876 00:31:26.876 Commands Supported and Effects 00:31:26.876 ============================== 00:31:26.876 Admin Commands 00:31:26.876 -------------- 00:31:26.876 Get Log Page (02h): Supported 00:31:26.876 Identify (06h): Supported 00:31:26.876 Abort (08h): Supported 00:31:26.876 Set Features (09h): Supported 00:31:26.876 Get Features (0Ah): Supported 00:31:26.876 Asynchronous Event Request (0Ch): Supported 00:31:26.876 Keep Alive (18h): Supported 00:31:26.876 I/O Commands 00:31:26.876 ------------ 00:31:26.876 Flush (00h): Supported 00:31:26.876 Write (01h): Supported LBA-Change 00:31:26.876 Read (02h): Supported 00:31:26.876 Write Zeroes (08h): Supported LBA-Change 00:31:26.876 Dataset Management (09h): Supported 00:31:26.876 00:31:26.876 Error Log 00:31:26.876 ========= 00:31:26.876 Entry: 0 00:31:26.876 Error Count: 0x3 00:31:26.876 Submission 
Queue Id: 0x0 00:31:26.876 Command Id: 0x5 00:31:26.876 Phase Bit: 0 00:31:26.876 Status Code: 0x2 00:31:26.876 Status Code Type: 0x0 00:31:26.876 Do Not Retry: 1 00:31:26.876 Error Location: 0x28 00:31:26.876 LBA: 0x0 00:31:26.876 Namespace: 0x0 00:31:26.876 Vendor Log Page: 0x0 00:31:26.876 ----------- 00:31:26.876 Entry: 1 00:31:26.876 Error Count: 0x2 00:31:26.876 Submission Queue Id: 0x0 00:31:26.876 Command Id: 0x5 00:31:26.876 Phase Bit: 0 00:31:26.876 Status Code: 0x2 00:31:26.876 Status Code Type: 0x0 00:31:26.876 Do Not Retry: 1 00:31:26.876 Error Location: 0x28 00:31:26.876 LBA: 0x0 00:31:26.876 Namespace: 0x0 00:31:26.876 Vendor Log Page: 0x0 00:31:26.876 ----------- 00:31:26.876 Entry: 2 00:31:26.876 Error Count: 0x1 00:31:26.876 Submission Queue Id: 0x0 00:31:26.876 Command Id: 0x4 00:31:26.876 Phase Bit: 0 00:31:26.876 Status Code: 0x2 00:31:26.876 Status Code Type: 0x0 00:31:26.876 Do Not Retry: 1 00:31:26.876 Error Location: 0x28 00:31:26.876 LBA: 0x0 00:31:26.876 Namespace: 0x0 00:31:26.876 Vendor Log Page: 0x0 00:31:26.876 00:31:26.876 Number of Queues 00:31:26.876 ================ 00:31:26.876 Number of I/O Submission Queues: 128 00:31:26.876 Number of I/O Completion Queues: 128 00:31:26.876 00:31:26.876 ZNS Specific Controller Data 00:31:26.876 ============================ 00:31:26.876 Zone Append Size Limit: 0 00:31:26.876 00:31:26.876 00:31:26.876 Active Namespaces 00:31:26.876 ================= 00:31:26.876 get_feature(0x05) failed 00:31:26.876 Namespace ID:1 00:31:26.876 Command Set Identifier: NVM (00h) 00:31:26.876 Deallocate: Supported 00:31:26.876 Deallocated/Unwritten Error: Not Supported 00:31:26.876 Deallocated Read Value: Unknown 00:31:26.876 Deallocate in Write Zeroes: Not Supported 00:31:26.876 Deallocated Guard Field: 0xFFFF 00:31:26.876 Flush: Supported 00:31:26.876 Reservation: Not Supported 00:31:26.876 Namespace Sharing Capabilities: Multiple Controllers 00:31:26.876 Size (in LBAs): 1310720 (5GiB) 00:31:26.876 Capacity (in LBAs): 1310720 (5GiB) 00:31:26.876 Utilization (in LBAs): 1310720 (5GiB) 00:31:26.876 UUID: e905fc8f-2724-4f13-b834-64016294cdf9 00:31:26.876 Thin Provisioning: Not Supported 00:31:26.876 Per-NS Atomic Units: Yes 00:31:26.876 Atomic Boundary Size (Normal): 0 00:31:26.876 Atomic Boundary Size (PFail): 0 00:31:26.876 Atomic Boundary Offset: 0 00:31:26.876 NGUID/EUI64 Never Reused: No 00:31:26.876 ANA group ID: 1 00:31:26.876 Namespace Write Protected: No 00:31:26.876 Number of LBA Formats: 1 00:31:26.876 Current LBA Format: LBA Format #00 00:31:26.876 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:31:26.876 00:31:26.876 21:34:15 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:26.876 21:34:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:26.876 21:34:15 -- nvmf/common.sh@117 -- # sync 00:31:26.876 21:34:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:26.876 21:34:16 -- nvmf/common.sh@120 -- # set +e 00:31:26.876 21:34:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:26.876 21:34:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:26.876 rmmod nvme_tcp 00:31:26.876 rmmod nvme_fabrics 00:31:26.876 21:34:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:26.876 21:34:16 -- nvmf/common.sh@124 -- # set -e 00:31:26.876 21:34:16 -- nvmf/common.sh@125 -- # return 0 00:31:26.876 21:34:16 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:31:26.876 21:34:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:26.876 21:34:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:26.876 21:34:16 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:26.876 21:34:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:26.876 21:34:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:26.876 21:34:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.876 21:34:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:26.876 21:34:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:27.156 21:34:16 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:27.156 21:34:16 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:27.156 21:34:16 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:27.156 21:34:16 -- nvmf/common.sh@675 -- # echo 0 00:31:27.157 21:34:16 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:27.157 21:34:16 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:27.157 21:34:16 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:27.157 21:34:16 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:27.157 21:34:16 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:31:27.157 21:34:16 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:31:27.157 21:34:16 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:27.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:28.007 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:28.007 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:28.007 00:31:28.007 real 0m3.002s 00:31:28.007 user 0m0.968s 00:31:28.007 sys 0m1.537s 00:31:28.007 21:34:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:28.007 21:34:17 -- common/autotest_common.sh@10 -- # set +x 00:31:28.007 ************************************ 00:31:28.007 END TEST nvmf_identify_kernel_target 00:31:28.007 ************************************ 00:31:28.007 21:34:17 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:28.007 21:34:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:28.007 21:34:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:28.007 21:34:17 -- common/autotest_common.sh@10 -- # set +x 00:31:28.283 ************************************ 00:31:28.283 START TEST nvmf_auth 00:31:28.283 ************************************ 00:31:28.283 21:34:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:28.283 * Looking for test storage... 
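The identify_kernel_nvmf run above drives the kernel NVMe-oF target purely through configfs: every mkdir, echo and ln -s in configure_kernel_target writes one attribute, and clean_kernel_target undoes them again at exit. The sketch below reconstructs that sequence with the values shown in the trace (NQN nqn.2016-06.io.spdk:testnqn, namespace device /dev/nvme1n1, TCP listener on 10.0.0.1:4420). The attribute file names themselves are not visible in the xtrace output, only the echoed values are, so they are filled in here from the standard kernel nvmet configfs layout and should be read as an assumption.

#!/usr/bin/env bash
# Reconstruction of the configfs writes behind configure_kernel_target above.
# Values mirror the log; attribute file names are assumed (stock nvmet layout).
set -e

NQN=nqn.2016-06.io.spdk:testnqn
DEV=/dev/nvme1n1
SUBSYS=/sys/kernel/config/nvmet/subsystems/$NQN
PORT=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir -p "$SUBSYS" "$SUBSYS/namespaces/1" "$PORT"

echo "SPDK-$NQN" > "$SUBSYS/attr_model"            # assumed attribute name
echo 1           > "$SUBSYS/attr_allow_any_host"   # assumed attribute name

echo "$DEV" > "$SUBSYS/namespaces/1/device_path"   # free NVMe disk found above
echo 1      > "$SUBSYS/namespaces/1/enable"

echo 10.0.0.1 > "$PORT/addr_traddr"
echo tcp      > "$PORT/addr_trtype"
echo 4420     > "$PORT/addr_trsvcid"
echo ipv4     > "$PORT/addr_adrfam"

ln -s "$SUBSYS" "$PORT/subsystems/"                # expose the subsystem on the port

# Teardown, as in clean_kernel_target above:
#   echo 0 > "$SUBSYS/namespaces/1/enable"
#   rm -f  "$PORT/subsystems/$NQN"
#   rmdir  "$SUBSYS/namespaces/1" "$PORT" "$SUBSYS"
#   modprobe -r nvmet_tcp nvmet

Each attribute is an ordinary configfs file, which is why the trace only shows bare mkdir, echo and ln -s commands followed by the nvme discover against 10.0.0.1:4420.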
00:31:28.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:28.283 21:34:17 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:28.283 21:34:17 -- nvmf/common.sh@7 -- # uname -s 00:31:28.283 21:34:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.283 21:34:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.283 21:34:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.283 21:34:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.283 21:34:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.283 21:34:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.283 21:34:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.283 21:34:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.283 21:34:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.283 21:34:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.283 21:34:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:31:28.283 21:34:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:31:28.283 21:34:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.284 21:34:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.284 21:34:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:28.284 21:34:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.284 21:34:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:28.284 21:34:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.284 21:34:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.284 21:34:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.284 21:34:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.284 21:34:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.284 21:34:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.284 21:34:17 -- paths/export.sh@5 -- # export PATH 00:31:28.284 21:34:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.284 21:34:17 -- nvmf/common.sh@47 -- # : 0 00:31:28.284 21:34:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:28.284 21:34:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:28.284 21:34:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:28.284 21:34:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.284 21:34:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.284 21:34:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:28.284 21:34:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:28.284 21:34:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:28.284 21:34:17 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:28.284 21:34:17 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:28.284 21:34:17 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:28.284 21:34:17 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:28.284 21:34:17 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:28.284 21:34:17 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:28.284 21:34:17 -- host/auth.sh@21 -- # keys=() 00:31:28.284 21:34:17 -- host/auth.sh@77 -- # nvmftestinit 00:31:28.284 21:34:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:28.284 21:34:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.284 21:34:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:28.284 21:34:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:28.284 21:34:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:28.284 21:34:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.284 21:34:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:28.284 21:34:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.284 21:34:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:31:28.284 21:34:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:31:28.284 21:34:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:31:28.284 21:34:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:31:28.284 21:34:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:31:28.284 21:34:17 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:31:28.284 21:34:17 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.284 21:34:17 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.284 21:34:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:28.284 21:34:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:28.284 21:34:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:28.284 21:34:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:28.284 21:34:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:28.284 21:34:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.284 21:34:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:28.284 21:34:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:28.284 21:34:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:28.284 21:34:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:28.284 21:34:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:28.284 21:34:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:28.284 Cannot find device "nvmf_tgt_br" 00:31:28.284 21:34:17 -- nvmf/common.sh@155 -- # true 00:31:28.284 21:34:17 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:28.284 Cannot find device "nvmf_tgt_br2" 00:31:28.284 21:34:17 -- nvmf/common.sh@156 -- # true 00:31:28.284 21:34:17 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:28.284 21:34:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:28.545 Cannot find device "nvmf_tgt_br" 00:31:28.545 21:34:17 -- nvmf/common.sh@158 -- # true 00:31:28.545 21:34:17 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:28.545 Cannot find device "nvmf_tgt_br2" 00:31:28.545 21:34:17 -- nvmf/common.sh@159 -- # true 00:31:28.545 21:34:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:28.545 21:34:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:28.545 21:34:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:28.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:28.545 21:34:17 -- nvmf/common.sh@162 -- # true 00:31:28.545 21:34:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:28.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:28.545 21:34:17 -- nvmf/common.sh@163 -- # true 00:31:28.545 21:34:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:28.545 21:34:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:28.545 21:34:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:28.545 21:34:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:28.545 21:34:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:28.545 21:34:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:28.545 21:34:17 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:28.545 21:34:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:28.545 21:34:17 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:28.545 21:34:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:28.545 21:34:17 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:28.545 21:34:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:28.545 21:34:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:28.545 21:34:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:28.545 21:34:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:28.545 21:34:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:28.545 21:34:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:28.545 21:34:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:28.545 21:34:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:28.545 21:34:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:28.545 21:34:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:28.545 21:34:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:28.545 21:34:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:28.545 21:34:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:28.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:28.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:31:28.545 00:31:28.545 --- 10.0.0.2 ping statistics --- 00:31:28.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.545 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:31:28.545 21:34:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:28.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:28.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:31:28.804 00:31:28.804 --- 10.0.0.3 ping statistics --- 00:31:28.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.804 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:31:28.804 21:34:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:28.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:28.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:31:28.804 00:31:28.804 --- 10.0.0.1 ping statistics --- 00:31:28.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.804 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:31:28.804 21:34:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.804 21:34:17 -- nvmf/common.sh@422 -- # return 0 00:31:28.804 21:34:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:28.804 21:34:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.804 21:34:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:28.804 21:34:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:28.804 21:34:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.804 21:34:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:28.804 21:34:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:28.804 21:34:17 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:31:28.804 21:34:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:28.804 21:34:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:28.804 21:34:17 -- common/autotest_common.sh@10 -- # set +x 00:31:28.804 21:34:17 -- nvmf/common.sh@470 -- # nvmfpid=102740 00:31:28.804 21:34:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:28.804 21:34:17 -- nvmf/common.sh@471 -- # waitforlisten 102740 00:31:28.804 21:34:17 -- common/autotest_common.sh@817 -- # '[' -z 102740 ']' 00:31:28.804 21:34:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.804 21:34:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:28.804 21:34:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
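The nvmftestinit/nvmf_veth_init steps replayed above (here and at the start of the previous test) condense to the standalone sketch below. Every command is taken from the xtrace output; only the teardown of any pre-existing topology is omitted, and the interface, bridge and namespace names as well as the 10.0.0.0/24 addressing are exactly the ones in the log.

#!/usr/bin/env bash
# Veth/bridge topology built by nvmf_veth_init: initiator in the root namespace,
# two target interfaces inside nvmf_tgt_ns_spdk, host-side peers joined by a bridge.
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: *_if is the endpoint that carries traffic, *_br its host-side peer.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing used by the tests: initiator 10.0.0.1, targets 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side peers so both namespaces share one L2 segment.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in and let the bridge forward between its ports.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity check, matching the three pings in the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

modprobe nvme-tcp

Bridging the host-side veth peers is what lets the root-namespace initiator at 10.0.0.1 reach 10.0.0.2 and 10.0.0.3 inside nvmf_tgt_ns_spdk, which is why all three pings in the trace come back with 0% packet loss.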
00:31:28.804 21:34:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:28.804 21:34:17 -- common/autotest_common.sh@10 -- # set +x 00:31:29.740 21:34:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:29.740 21:34:18 -- common/autotest_common.sh@850 -- # return 0 00:31:29.740 21:34:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:29.740 21:34:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:29.740 21:34:18 -- common/autotest_common.sh@10 -- # set +x 00:31:29.740 21:34:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:29.740 21:34:18 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:29.740 21:34:18 -- host/auth.sh@81 -- # gen_key null 32 00:31:29.740 21:34:18 -- host/auth.sh@53 -- # local digest len file key 00:31:29.740 21:34:18 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:29.740 21:34:18 -- host/auth.sh@54 -- # local -A digests 00:31:29.740 21:34:18 -- host/auth.sh@56 -- # digest=null 00:31:29.740 21:34:18 -- host/auth.sh@56 -- # len=32 00:31:29.740 21:34:18 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:29.741 21:34:18 -- host/auth.sh@57 -- # key=8d50eef70971a28f3e26a9ee49cb4655 00:31:29.741 21:34:18 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:31:29.741 21:34:18 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.tPQ 00:31:29.741 21:34:18 -- host/auth.sh@59 -- # format_dhchap_key 8d50eef70971a28f3e26a9ee49cb4655 0 00:31:29.741 21:34:18 -- nvmf/common.sh@708 -- # format_key DHHC-1 8d50eef70971a28f3e26a9ee49cb4655 0 00:31:29.741 21:34:18 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:29.741 21:34:18 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:31:29.741 21:34:18 -- nvmf/common.sh@693 -- # key=8d50eef70971a28f3e26a9ee49cb4655 00:31:29.741 21:34:18 -- nvmf/common.sh@693 -- # digest=0 00:31:29.741 21:34:18 -- nvmf/common.sh@694 -- # python - 00:31:29.741 21:34:18 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.tPQ 00:31:29.741 21:34:18 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.tPQ 00:31:29.741 21:34:18 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.tPQ 00:31:29.741 21:34:18 -- host/auth.sh@82 -- # gen_key null 48 00:31:29.741 21:34:18 -- host/auth.sh@53 -- # local digest len file key 00:31:29.741 21:34:18 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:29.741 21:34:18 -- host/auth.sh@54 -- # local -A digests 00:31:29.741 21:34:18 -- host/auth.sh@56 -- # digest=null 00:31:29.741 21:34:18 -- host/auth.sh@56 -- # len=48 00:31:29.741 21:34:18 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:29.741 21:34:18 -- host/auth.sh@57 -- # key=235c80a55e9df4a84ff15b0feb7c420adf3efb29d81a2807 00:31:29.741 21:34:18 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:31:29.741 21:34:18 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.dFH 00:31:29.741 21:34:18 -- host/auth.sh@59 -- # format_dhchap_key 235c80a55e9df4a84ff15b0feb7c420adf3efb29d81a2807 0 00:31:29.741 21:34:18 -- nvmf/common.sh@708 -- # format_key DHHC-1 235c80a55e9df4a84ff15b0feb7c420adf3efb29d81a2807 0 00:31:29.741 21:34:18 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:29.741 21:34:18 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:31:29.741 21:34:18 -- nvmf/common.sh@693 -- # key=235c80a55e9df4a84ff15b0feb7c420adf3efb29d81a2807 00:31:29.741 21:34:18 -- nvmf/common.sh@693 -- # digest=0 00:31:29.741 
21:34:18 -- nvmf/common.sh@694 -- # python - 00:31:29.741 21:34:18 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.dFH 00:31:30.001 21:34:18 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.dFH 00:31:30.001 21:34:18 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.dFH 00:31:30.001 21:34:18 -- host/auth.sh@83 -- # gen_key sha256 32 00:31:30.001 21:34:18 -- host/auth.sh@53 -- # local digest len file key 00:31:30.001 21:34:18 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:30.001 21:34:18 -- host/auth.sh@54 -- # local -A digests 00:31:30.001 21:34:18 -- host/auth.sh@56 -- # digest=sha256 00:31:30.001 21:34:18 -- host/auth.sh@56 -- # len=32 00:31:30.001 21:34:18 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:30.001 21:34:18 -- host/auth.sh@57 -- # key=1b55dca104f021b338b7fb164e89f0f3 00:31:30.001 21:34:19 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:31:30.001 21:34:19 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.iG1 00:31:30.001 21:34:19 -- host/auth.sh@59 -- # format_dhchap_key 1b55dca104f021b338b7fb164e89f0f3 1 00:31:30.001 21:34:19 -- nvmf/common.sh@708 -- # format_key DHHC-1 1b55dca104f021b338b7fb164e89f0f3 1 00:31:30.001 21:34:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:30.001 21:34:19 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:31:30.001 21:34:19 -- nvmf/common.sh@693 -- # key=1b55dca104f021b338b7fb164e89f0f3 00:31:30.001 21:34:19 -- nvmf/common.sh@693 -- # digest=1 00:31:30.001 21:34:19 -- nvmf/common.sh@694 -- # python - 00:31:30.001 21:34:19 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.iG1 00:31:30.001 21:34:19 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.iG1 00:31:30.001 21:34:19 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.iG1 00:31:30.001 21:34:19 -- host/auth.sh@84 -- # gen_key sha384 48 00:31:30.001 21:34:19 -- host/auth.sh@53 -- # local digest len file key 00:31:30.001 21:34:19 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:30.001 21:34:19 -- host/auth.sh@54 -- # local -A digests 00:31:30.001 21:34:19 -- host/auth.sh@56 -- # digest=sha384 00:31:30.001 21:34:19 -- host/auth.sh@56 -- # len=48 00:31:30.001 21:34:19 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:30.001 21:34:19 -- host/auth.sh@57 -- # key=bf67bd2e9342fb534e51e330fe5ff117a7c8850a3ad0b358 00:31:30.001 21:34:19 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:31:30.001 21:34:19 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.53n 00:31:30.001 21:34:19 -- host/auth.sh@59 -- # format_dhchap_key bf67bd2e9342fb534e51e330fe5ff117a7c8850a3ad0b358 2 00:31:30.001 21:34:19 -- nvmf/common.sh@708 -- # format_key DHHC-1 bf67bd2e9342fb534e51e330fe5ff117a7c8850a3ad0b358 2 00:31:30.001 21:34:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:30.001 21:34:19 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:31:30.001 21:34:19 -- nvmf/common.sh@693 -- # key=bf67bd2e9342fb534e51e330fe5ff117a7c8850a3ad0b358 00:31:30.001 21:34:19 -- nvmf/common.sh@693 -- # digest=2 00:31:30.001 21:34:19 -- nvmf/common.sh@694 -- # python - 00:31:30.001 21:34:19 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.53n 00:31:30.001 21:34:19 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.53n 00:31:30.001 21:34:19 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.53n 00:31:30.001 21:34:19 -- host/auth.sh@85 -- # gen_key sha512 64 00:31:30.001 21:34:19 -- host/auth.sh@53 -- # local digest len file key 00:31:30.001 21:34:19 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:30.001 21:34:19 -- host/auth.sh@54 -- # local -A digests 00:31:30.001 21:34:19 -- host/auth.sh@56 -- # digest=sha512 00:31:30.001 21:34:19 -- host/auth.sh@56 -- # len=64 00:31:30.001 21:34:19 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:30.001 21:34:19 -- host/auth.sh@57 -- # key=ccf8f02bd708e2dd5746692643fd6101c3c7e17d1ef2d95bab0487076c50ce48 00:31:30.001 21:34:19 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:31:30.001 21:34:19 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.nvo 00:31:30.001 21:34:19 -- host/auth.sh@59 -- # format_dhchap_key ccf8f02bd708e2dd5746692643fd6101c3c7e17d1ef2d95bab0487076c50ce48 3 00:31:30.001 21:34:19 -- nvmf/common.sh@708 -- # format_key DHHC-1 ccf8f02bd708e2dd5746692643fd6101c3c7e17d1ef2d95bab0487076c50ce48 3 00:31:30.001 21:34:19 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:30.001 21:34:19 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:31:30.001 21:34:19 -- nvmf/common.sh@693 -- # key=ccf8f02bd708e2dd5746692643fd6101c3c7e17d1ef2d95bab0487076c50ce48 00:31:30.001 21:34:19 -- nvmf/common.sh@693 -- # digest=3 00:31:30.001 21:34:19 -- nvmf/common.sh@694 -- # python - 00:31:30.001 21:34:19 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.nvo 00:31:30.001 21:34:19 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.nvo 00:31:30.001 21:34:19 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.nvo 00:31:30.001 21:34:19 -- host/auth.sh@87 -- # waitforlisten 102740 00:31:30.001 21:34:19 -- common/autotest_common.sh@817 -- # '[' -z 102740 ']' 00:31:30.001 21:34:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.001 21:34:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:30.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.001 21:34:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
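The gen_key calls above draw the raw key material with xxd from /dev/urandom (16, 24 or 32 bytes depending on the requested length), pass it to format_dhchap_key, which wraps it into a DHHC-1:<digest>:...: string via the small helper shown as "python -", and store the result under /tmp with mode 0600. A minimal sketch of that flow follows; it stops at the raw hex material, since the DHHC-1 wrapping is done by the test's own helper, and the rpc.py path used for registration is an assumption for illustration.

#!/usr/bin/env bash
# Sketch of the key handling above: random material via xxd, a 0600 key file,
# then registration with the running target through the keyring_file_add_key RPC
# (the same RPC invoked as rpc_cmd keyring_file_add_key in the following log lines).
# NOTE: in the real run the file holds the DHHC-1-formatted key produced by
# format_dhchap_key; raw hex is written here only to keep the sketch self-contained.
set -e

key_hex=$(xxd -p -c0 -l 16 /dev/urandom)      # 32 hex chars, as in "gen_key null 32"

keyfile=$(mktemp -t spdk.key-null.XXX)
echo "$key_hex" > "$keyfile"
chmod 0600 "$keyfile"

# Path to rpc.py is an assumption; the suite goes through its rpc_cmd wrapper.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 "$keyfile"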
00:31:30.001 21:34:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:30.001 21:34:19 -- common/autotest_common.sh@10 -- # set +x 00:31:30.261 21:34:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:30.261 21:34:19 -- common/autotest_common.sh@850 -- # return 0 00:31:30.261 21:34:19 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:31:30.261 21:34:19 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tPQ 00:31:30.261 21:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.261 21:34:19 -- common/autotest_common.sh@10 -- # set +x 00:31:30.261 21:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.261 21:34:19 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:31:30.261 21:34:19 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.dFH 00:31:30.261 21:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.261 21:34:19 -- common/autotest_common.sh@10 -- # set +x 00:31:30.261 21:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.261 21:34:19 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:31:30.261 21:34:19 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.iG1 00:31:30.261 21:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.261 21:34:19 -- common/autotest_common.sh@10 -- # set +x 00:31:30.261 21:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.261 21:34:19 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:31:30.261 21:34:19 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.53n 00:31:30.261 21:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.261 21:34:19 -- common/autotest_common.sh@10 -- # set +x 00:31:30.261 21:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.261 21:34:19 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:31:30.261 21:34:19 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.nvo 00:31:30.261 21:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:30.261 21:34:19 -- common/autotest_common.sh@10 -- # set +x 00:31:30.261 21:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.261 21:34:19 -- host/auth.sh@92 -- # nvmet_auth_init 00:31:30.520 21:34:19 -- host/auth.sh@35 -- # get_main_ns_ip 00:31:30.520 21:34:19 -- nvmf/common.sh@717 -- # local ip 00:31:30.520 21:34:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:30.520 21:34:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:30.520 21:34:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:30.520 21:34:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:30.520 21:34:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:30.520 21:34:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:30.520 21:34:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:30.520 21:34:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:30.520 21:34:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:30.520 21:34:19 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:30.520 21:34:19 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:30.520 21:34:19 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:31:30.520 21:34:19 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:30.520 21:34:19 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:30.520 21:34:19 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:30.520 21:34:19 -- nvmf/common.sh@628 -- # local block nvme 00:31:30.520 21:34:19 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:31:30.520 21:34:19 -- nvmf/common.sh@631 -- # modprobe nvmet 00:31:30.520 21:34:19 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:30.521 21:34:19 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:30.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:31.039 Waiting for block devices as requested 00:31:31.039 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:31.039 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:31.978 21:34:20 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:31.978 21:34:20 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:31.978 21:34:20 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:31:31.978 21:34:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:31.978 21:34:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:31.978 21:34:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:31.978 21:34:20 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:31:31.978 21:34:20 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:31.978 21:34:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:31:31.979 No valid GPT data, bailing 00:31:31.979 21:34:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:31.979 21:34:20 -- scripts/common.sh@391 -- # pt= 00:31:31.979 21:34:20 -- scripts/common.sh@392 -- # return 1 00:31:31.979 21:34:20 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:31:31.979 21:34:20 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:31.979 21:34:20 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:31:31.979 21:34:20 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:31:31.979 21:34:20 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:31:31.979 21:34:20 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:31:31.979 21:34:20 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:31.979 21:34:20 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:31:31.979 21:34:20 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:31:31.979 21:34:20 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:31:31.979 No valid GPT data, bailing 00:31:31.979 21:34:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:31:31.979 21:34:21 -- scripts/common.sh@391 -- # pt= 00:31:31.979 21:34:21 -- scripts/common.sh@392 -- # return 1 00:31:31.979 21:34:21 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:31:31.979 21:34:21 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:31.979 21:34:21 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:31:31.979 21:34:21 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:31:31.979 21:34:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:31:31.979 21:34:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:31:31.979 21:34:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:31.979 21:34:21 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:31:31.979 21:34:21 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:31:31.979 21:34:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:31:31.979 No valid GPT data, bailing 00:31:31.979 21:34:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:31:31.979 21:34:21 -- scripts/common.sh@391 -- # pt= 00:31:31.979 21:34:21 -- scripts/common.sh@392 -- # return 1 00:31:31.979 21:34:21 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:31:31.979 21:34:21 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:31.979 21:34:21 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:31.979 21:34:21 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:31:31.979 21:34:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:31:31.979 21:34:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:31.979 21:34:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:31.979 21:34:21 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:31:31.979 21:34:21 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:31:31.979 21:34:21 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:31:31.979 No valid GPT data, bailing 00:31:31.979 21:34:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:31.979 21:34:21 -- scripts/common.sh@391 -- # pt= 00:31:31.979 21:34:21 -- scripts/common.sh@392 -- # return 1 00:31:31.979 21:34:21 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:31:31.979 21:34:21 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:31:31.979 21:34:21 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:31.979 21:34:21 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:31.979 21:34:21 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:31.979 21:34:21 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:31.979 21:34:21 -- nvmf/common.sh@656 -- # echo 1 00:31:31.979 21:34:21 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:31:31.979 21:34:21 -- nvmf/common.sh@658 -- # echo 1 00:31:31.979 21:34:21 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:31:31.979 21:34:21 -- nvmf/common.sh@661 -- # echo tcp 00:31:31.979 21:34:21 -- nvmf/common.sh@662 -- # echo 4420 00:31:31.979 21:34:21 -- nvmf/common.sh@663 -- # echo ipv4 00:31:31.979 21:34:21 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:31.979 21:34:21 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -a 10.0.0.1 -t tcp -s 4420 00:31:31.979 00:31:31.979 Discovery Log Number of Records 2, Generation counter 2 00:31:31.979 =====Discovery Log Entry 0====== 00:31:31.979 trtype: tcp 00:31:31.979 adrfam: ipv4 00:31:31.979 subtype: current discovery subsystem 00:31:31.979 treq: not specified, sq flow control disable supported 00:31:31.979 portid: 1 00:31:31.979 trsvcid: 4420 00:31:31.979 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:31.979 traddr: 10.0.0.1 00:31:31.979 eflags: none 00:31:31.979 sectype: none 00:31:31.979 =====Discovery Log Entry 1====== 00:31:31.979 trtype: tcp 00:31:31.979 adrfam: ipv4 00:31:31.979 subtype: nvme subsystem 00:31:31.979 treq: not specified, sq flow control disable supported 
00:31:31.979 portid: 1 00:31:31.979 trsvcid: 4420 00:31:31.979 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:31.979 traddr: 10.0.0.1 00:31:31.979 eflags: none 00:31:31.979 sectype: none 00:31:31.979 21:34:21 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:31.979 21:34:21 -- host/auth.sh@37 -- # echo 0 00:31:31.979 21:34:21 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:31.979 21:34:21 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:31.979 21:34:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:31.979 21:34:21 -- host/auth.sh@44 -- # digest=sha256 00:31:31.979 21:34:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:31.979 21:34:21 -- host/auth.sh@44 -- # keyid=1 00:31:31.979 21:34:21 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:31.979 21:34:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:32.239 21:34:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:32.239 21:34:21 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:32.239 21:34:21 -- host/auth.sh@100 -- # IFS=, 00:31:32.239 21:34:21 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:31:32.239 21:34:21 -- host/auth.sh@100 -- # IFS=, 00:31:32.239 21:34:21 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:32.239 21:34:21 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:32.239 21:34:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:32.239 21:34:21 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:31:32.239 21:34:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:32.239 21:34:21 -- host/auth.sh@68 -- # keyid=1 00:31:32.239 21:34:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:32.239 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.239 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.239 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.239 21:34:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:32.239 21:34:21 -- nvmf/common.sh@717 -- # local ip 00:31:32.239 21:34:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:32.239 21:34:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:32.239 21:34:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.239 21:34:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.239 21:34:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:32.239 21:34:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.239 21:34:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:32.239 21:34:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:32.239 21:34:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:32.239 21:34:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:32.239 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.239 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.239 
nvme0n1 00:31:32.239 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.239 21:34:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.239 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.240 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.240 21:34:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:32.240 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.500 21:34:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.500 21:34:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.500 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.500 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.500 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.500 21:34:21 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:31:32.500 21:34:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:32.500 21:34:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:32.500 21:34:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:32.500 21:34:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:32.500 21:34:21 -- host/auth.sh@44 -- # digest=sha256 00:31:32.500 21:34:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:32.500 21:34:21 -- host/auth.sh@44 -- # keyid=0 00:31:32.500 21:34:21 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:32.500 21:34:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:32.500 21:34:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:32.500 21:34:21 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:32.500 21:34:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:31:32.500 21:34:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:32.500 21:34:21 -- host/auth.sh@68 -- # digest=sha256 00:31:32.500 21:34:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:32.500 21:34:21 -- host/auth.sh@68 -- # keyid=0 00:31:32.500 21:34:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:32.500 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.500 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.500 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.500 21:34:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:32.500 21:34:21 -- nvmf/common.sh@717 -- # local ip 00:31:32.500 21:34:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:32.500 21:34:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:32.500 21:34:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.500 21:34:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.500 21:34:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:32.500 21:34:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.500 21:34:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:32.500 21:34:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:32.500 21:34:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:32.500 21:34:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:32.500 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.500 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.500 nvme0n1 
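The kernel-target side of the run is hard to read in the trace because the shell only logs the echo arguments, never the configfs files they are redirected into. A minimal sketch of an equivalent bring-up follows; the attribute paths (attr_allow_any_host, device_path, enable, addr_*, dhchap_hash, dhchap_dhgroup, dhchap_key) are the standard nvmet configfs names and are assumed here, since the redirect targets do not appear in the log. Addresses, NQNs and the key value are copied from the trace.

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port" "$host"

# Back the namespace with the block device picked by the "No valid GPT data" scan above.
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

# TCP listener matching the discovery-log output above.
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Only the explicitly allowed host may connect, with DH-HMAC-CHAP parameters matching
# the nvmet_auth_set_key sha256 ffdhe2048 1 call in the trace.
echo 0 > "$subsys/attr_allow_any_host"
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==:' > "$host/dhchap_key"
ln -s "$host" "$subsys/allowed_hosts/"
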
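On the SPDK initiator side, each connect_authenticate iteration in the loop above reduces to the same few RPCs. Below is a condensed, hypothetical replay of the sha256 / ffdhe2048 / key1 case using scripts/rpc.py directly; the assumption is that the harness's rpc_cmd wrapper simply forwards its arguments to that script against /var/tmp/spdk.sock. All RPC names, flags and values are taken from the trace.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Register the generated secret file under the name the attach call will reference.
"$rpc" keyring_file_add_key key1 /tmp/spdk.key-null.dFH

# Restrict the initiator to the digest/DH-group pair being exercised.
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Connect to the kernel target with DH-HMAC-CHAP using that key.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1

# Confirm the controller appeared, then tear it down before the next combination.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0
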
00:31:32.500 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.500 21:34:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.500 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.500 21:34:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:32.500 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.500 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.500 21:34:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.500 21:34:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.500 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.500 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.500 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.500 21:34:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:32.500 21:34:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:32.500 21:34:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:32.500 21:34:21 -- host/auth.sh@44 -- # digest=sha256 00:31:32.500 21:34:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:32.500 21:34:21 -- host/auth.sh@44 -- # keyid=1 00:31:32.500 21:34:21 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:32.500 21:34:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:32.500 21:34:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:32.500 21:34:21 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:32.500 21:34:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:31:32.500 21:34:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:32.500 21:34:21 -- host/auth.sh@68 -- # digest=sha256 00:31:32.500 21:34:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:32.500 21:34:21 -- host/auth.sh@68 -- # keyid=1 00:31:32.500 21:34:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:32.500 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.500 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.500 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.500 21:34:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:32.500 21:34:21 -- nvmf/common.sh@717 -- # local ip 00:31:32.500 21:34:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:32.500 21:34:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:32.500 21:34:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.500 21:34:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.500 21:34:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:32.500 21:34:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.500 21:34:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:32.500 21:34:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:32.500 21:34:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:32.500 21:34:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:32.501 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.501 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.760 nvme0n1 00:31:32.760 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.760 21:34:21 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:31:32.760 21:34:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:32.761 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.761 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.761 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.761 21:34:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:32.761 21:34:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.761 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.761 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.761 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.761 21:34:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:32.761 21:34:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:32.761 21:34:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:32.761 21:34:21 -- host/auth.sh@44 -- # digest=sha256 00:31:32.761 21:34:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:32.761 21:34:21 -- host/auth.sh@44 -- # keyid=2 00:31:32.761 21:34:21 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:32.761 21:34:21 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:32.761 21:34:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:32.761 21:34:21 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:32.761 21:34:21 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:31:32.761 21:34:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:32.761 21:34:21 -- host/auth.sh@68 -- # digest=sha256 00:31:32.761 21:34:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:32.761 21:34:21 -- host/auth.sh@68 -- # keyid=2 00:31:32.761 21:34:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:32.761 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.761 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.761 21:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.761 21:34:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:32.761 21:34:21 -- nvmf/common.sh@717 -- # local ip 00:31:32.761 21:34:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:32.761 21:34:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:32.761 21:34:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:32.761 21:34:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:32.761 21:34:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:32.761 21:34:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:32.761 21:34:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:32.761 21:34:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:32.761 21:34:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:32.761 21:34:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:32.761 21:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.761 21:34:21 -- common/autotest_common.sh@10 -- # set +x 00:31:32.761 nvme0n1 00:31:32.761 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:32.761 21:34:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:32.761 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:32.761 21:34:22 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:31:32.761 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.020 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.020 21:34:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.020 21:34:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.020 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.020 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.020 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.020 21:34:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:33.020 21:34:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:33.020 21:34:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:33.020 21:34:22 -- host/auth.sh@44 -- # digest=sha256 00:31:33.020 21:34:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:33.020 21:34:22 -- host/auth.sh@44 -- # keyid=3 00:31:33.020 21:34:22 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:33.020 21:34:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:33.020 21:34:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:33.020 21:34:22 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:33.020 21:34:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:31:33.020 21:34:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:33.020 21:34:22 -- host/auth.sh@68 -- # digest=sha256 00:31:33.020 21:34:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:33.020 21:34:22 -- host/auth.sh@68 -- # keyid=3 00:31:33.020 21:34:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:33.020 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.020 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.020 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.020 21:34:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:33.020 21:34:22 -- nvmf/common.sh@717 -- # local ip 00:31:33.020 21:34:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:33.020 21:34:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:33.020 21:34:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.020 21:34:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.020 21:34:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:33.020 21:34:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.020 21:34:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:33.020 21:34:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:33.020 21:34:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:33.020 21:34:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:33.020 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.020 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.020 nvme0n1 00:31:33.020 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.020 21:34:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.020 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.020 21:34:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:33.020 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.020 21:34:22 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.020 21:34:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.020 21:34:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.020 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.020 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.021 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.021 21:34:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:33.021 21:34:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:33.021 21:34:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:33.021 21:34:22 -- host/auth.sh@44 -- # digest=sha256 00:31:33.021 21:34:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:33.021 21:34:22 -- host/auth.sh@44 -- # keyid=4 00:31:33.021 21:34:22 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:33.021 21:34:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:33.021 21:34:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:33.021 21:34:22 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:33.280 21:34:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:31:33.280 21:34:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:33.280 21:34:22 -- host/auth.sh@68 -- # digest=sha256 00:31:33.280 21:34:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:33.280 21:34:22 -- host/auth.sh@68 -- # keyid=4 00:31:33.280 21:34:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:33.280 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.280 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.280 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.280 21:34:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:33.280 21:34:22 -- nvmf/common.sh@717 -- # local ip 00:31:33.280 21:34:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:33.280 21:34:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:33.280 21:34:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.280 21:34:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.280 21:34:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:33.280 21:34:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.280 21:34:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:33.280 21:34:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:33.280 21:34:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:33.280 21:34:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:33.280 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.280 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.280 nvme0n1 00:31:33.280 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.280 21:34:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.280 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.280 21:34:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:33.280 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.280 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.280 21:34:22 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.280 21:34:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.280 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.280 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.280 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.280 21:34:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:33.280 21:34:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:33.280 21:34:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:33.280 21:34:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:33.280 21:34:22 -- host/auth.sh@44 -- # digest=sha256 00:31:33.280 21:34:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:33.280 21:34:22 -- host/auth.sh@44 -- # keyid=0 00:31:33.280 21:34:22 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:33.280 21:34:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:33.280 21:34:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:33.540 21:34:22 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:33.540 21:34:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:31:33.540 21:34:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:33.540 21:34:22 -- host/auth.sh@68 -- # digest=sha256 00:31:33.540 21:34:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:33.540 21:34:22 -- host/auth.sh@68 -- # keyid=0 00:31:33.540 21:34:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:33.540 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.540 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.540 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.540 21:34:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:33.540 21:34:22 -- nvmf/common.sh@717 -- # local ip 00:31:33.540 21:34:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:33.540 21:34:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:33.540 21:34:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.540 21:34:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.540 21:34:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:33.540 21:34:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.540 21:34:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:33.540 21:34:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:33.540 21:34:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:33.540 21:34:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:33.540 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.540 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.800 nvme0n1 00:31:33.800 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.800 21:34:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.800 21:34:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:33.800 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.800 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.800 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.800 21:34:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.800 21:34:22 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.800 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.800 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.800 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.800 21:34:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:33.800 21:34:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:33.800 21:34:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:33.800 21:34:22 -- host/auth.sh@44 -- # digest=sha256 00:31:33.800 21:34:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:33.800 21:34:22 -- host/auth.sh@44 -- # keyid=1 00:31:33.800 21:34:22 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:33.800 21:34:22 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:33.800 21:34:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:33.800 21:34:22 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:33.800 21:34:22 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:31:33.800 21:34:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:33.800 21:34:22 -- host/auth.sh@68 -- # digest=sha256 00:31:33.800 21:34:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:33.800 21:34:22 -- host/auth.sh@68 -- # keyid=1 00:31:33.800 21:34:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:33.800 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.800 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.800 21:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.800 21:34:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:33.800 21:34:22 -- nvmf/common.sh@717 -- # local ip 00:31:33.800 21:34:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:33.800 21:34:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:33.800 21:34:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.800 21:34:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.800 21:34:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:33.800 21:34:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.800 21:34:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:33.800 21:34:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:33.800 21:34:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:33.800 21:34:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:33.800 21:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.800 21:34:22 -- common/autotest_common.sh@10 -- # set +x 00:31:33.800 nvme0n1 00:31:33.800 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:33.800 21:34:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:33.800 21:34:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.800 21:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:33.800 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.060 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.060 21:34:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.060 21:34:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.060 21:34:23 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:31:34.060 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.060 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.060 21:34:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:34.060 21:34:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:34.060 21:34:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:34.060 21:34:23 -- host/auth.sh@44 -- # digest=sha256 00:31:34.060 21:34:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:34.060 21:34:23 -- host/auth.sh@44 -- # keyid=2 00:31:34.060 21:34:23 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:34.060 21:34:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:34.060 21:34:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:34.060 21:34:23 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:34.060 21:34:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:31:34.060 21:34:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:34.060 21:34:23 -- host/auth.sh@68 -- # digest=sha256 00:31:34.060 21:34:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:34.060 21:34:23 -- host/auth.sh@68 -- # keyid=2 00:31:34.060 21:34:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:34.060 21:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.060 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.060 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.060 21:34:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:34.060 21:34:23 -- nvmf/common.sh@717 -- # local ip 00:31:34.060 21:34:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:34.060 21:34:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:34.060 21:34:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.060 21:34:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.060 21:34:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:34.060 21:34:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.060 21:34:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:34.060 21:34:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:34.060 21:34:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:34.060 21:34:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:34.060 21:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.060 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.060 nvme0n1 00:31:34.060 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.060 21:34:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.060 21:34:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:34.060 21:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.060 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.060 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.060 21:34:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.060 21:34:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.060 21:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.060 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.060 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.060 
21:34:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:34.060 21:34:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:34.060 21:34:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:34.060 21:34:23 -- host/auth.sh@44 -- # digest=sha256 00:31:34.060 21:34:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:34.060 21:34:23 -- host/auth.sh@44 -- # keyid=3 00:31:34.060 21:34:23 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:34.060 21:34:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:34.060 21:34:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:34.060 21:34:23 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:34.060 21:34:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:31:34.060 21:34:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:34.060 21:34:23 -- host/auth.sh@68 -- # digest=sha256 00:31:34.060 21:34:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:34.060 21:34:23 -- host/auth.sh@68 -- # keyid=3 00:31:34.060 21:34:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:34.060 21:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.060 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.060 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.060 21:34:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:34.060 21:34:23 -- nvmf/common.sh@717 -- # local ip 00:31:34.060 21:34:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:34.060 21:34:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:34.060 21:34:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.060 21:34:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.060 21:34:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:34.060 21:34:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.060 21:34:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:34.060 21:34:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:34.060 21:34:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:34.060 21:34:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:34.060 21:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.060 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.318 nvme0n1 00:31:34.318 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.318 21:34:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.318 21:34:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:34.318 21:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.318 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.318 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.318 21:34:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.318 21:34:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.318 21:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.318 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.318 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.318 21:34:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:34.318 21:34:23 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:31:34.318 21:34:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:34.318 21:34:23 -- host/auth.sh@44 -- # digest=sha256 00:31:34.318 21:34:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:34.318 21:34:23 -- host/auth.sh@44 -- # keyid=4 00:31:34.318 21:34:23 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:34.318 21:34:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:34.318 21:34:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:34.318 21:34:23 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:34.318 21:34:23 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:31:34.318 21:34:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:34.318 21:34:23 -- host/auth.sh@68 -- # digest=sha256 00:31:34.318 21:34:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:34.318 21:34:23 -- host/auth.sh@68 -- # keyid=4 00:31:34.318 21:34:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:34.318 21:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.318 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.318 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.318 21:34:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:34.318 21:34:23 -- nvmf/common.sh@717 -- # local ip 00:31:34.318 21:34:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:34.318 21:34:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:34.318 21:34:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.318 21:34:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.318 21:34:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:34.318 21:34:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.319 21:34:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:34.319 21:34:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:34.319 21:34:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:34.319 21:34:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:34.319 21:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.319 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.576 nvme0n1 00:31:34.576 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.576 21:34:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.576 21:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.576 21:34:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:34.576 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.576 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.576 21:34:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.576 21:34:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.576 21:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:34.576 21:34:23 -- common/autotest_common.sh@10 -- # set +x 00:31:34.576 21:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:34.576 21:34:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:34.576 21:34:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:34.576 21:34:23 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:31:34.576 21:34:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:34.576 21:34:23 -- host/auth.sh@44 -- # digest=sha256 00:31:34.576 21:34:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:34.576 21:34:23 -- host/auth.sh@44 -- # keyid=0 00:31:34.576 21:34:23 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:34.576 21:34:23 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:34.576 21:34:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:35.142 21:34:24 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:35.142 21:34:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:31:35.142 21:34:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:35.142 21:34:24 -- host/auth.sh@68 -- # digest=sha256 00:31:35.142 21:34:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:35.142 21:34:24 -- host/auth.sh@68 -- # keyid=0 00:31:35.142 21:34:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:35.142 21:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.142 21:34:24 -- common/autotest_common.sh@10 -- # set +x 00:31:35.142 21:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.142 21:34:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:35.142 21:34:24 -- nvmf/common.sh@717 -- # local ip 00:31:35.142 21:34:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:35.142 21:34:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:35.142 21:34:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.142 21:34:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.142 21:34:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:35.142 21:34:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.142 21:34:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:35.142 21:34:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:35.142 21:34:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:35.142 21:34:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:35.142 21:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.142 21:34:24 -- common/autotest_common.sh@10 -- # set +x 00:31:35.436 nvme0n1 00:31:35.436 21:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.436 21:34:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.436 21:34:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:35.436 21:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.436 21:34:24 -- common/autotest_common.sh@10 -- # set +x 00:31:35.436 21:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.436 21:34:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.436 21:34:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.436 21:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.436 21:34:24 -- common/autotest_common.sh@10 -- # set +x 00:31:35.436 21:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.436 21:34:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:35.436 21:34:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:35.436 21:34:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:35.436 21:34:24 -- host/auth.sh@44 -- # 
digest=sha256 00:31:35.436 21:34:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:35.436 21:34:24 -- host/auth.sh@44 -- # keyid=1 00:31:35.436 21:34:24 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:35.436 21:34:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:35.436 21:34:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:35.436 21:34:24 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:35.436 21:34:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:31:35.436 21:34:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:35.436 21:34:24 -- host/auth.sh@68 -- # digest=sha256 00:31:35.436 21:34:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:35.436 21:34:24 -- host/auth.sh@68 -- # keyid=1 00:31:35.436 21:34:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:35.436 21:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.437 21:34:24 -- common/autotest_common.sh@10 -- # set +x 00:31:35.437 21:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.437 21:34:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:35.437 21:34:24 -- nvmf/common.sh@717 -- # local ip 00:31:35.437 21:34:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:35.437 21:34:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:35.437 21:34:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.437 21:34:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.437 21:34:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:35.437 21:34:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.437 21:34:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:35.437 21:34:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:35.437 21:34:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:35.437 21:34:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:35.437 21:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.437 21:34:24 -- common/autotest_common.sh@10 -- # set +x 00:31:35.709 nvme0n1 00:31:35.709 21:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.709 21:34:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.709 21:34:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:35.709 21:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.709 21:34:24 -- common/autotest_common.sh@10 -- # set +x 00:31:35.709 21:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.709 21:34:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.709 21:34:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.709 21:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.709 21:34:24 -- common/autotest_common.sh@10 -- # set +x 00:31:35.709 21:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.709 21:34:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:35.709 21:34:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:35.709 21:34:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:35.709 21:34:24 -- host/auth.sh@44 -- # digest=sha256 00:31:35.709 21:34:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:35.709 21:34:24 -- host/auth.sh@44 
-- # keyid=2 00:31:35.709 21:34:24 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:35.709 21:34:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:35.709 21:34:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:35.709 21:34:24 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:35.709 21:34:24 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:31:35.709 21:34:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:35.709 21:34:24 -- host/auth.sh@68 -- # digest=sha256 00:31:35.709 21:34:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:35.709 21:34:24 -- host/auth.sh@68 -- # keyid=2 00:31:35.709 21:34:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:35.709 21:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.710 21:34:24 -- common/autotest_common.sh@10 -- # set +x 00:31:35.710 21:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.710 21:34:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:35.710 21:34:24 -- nvmf/common.sh@717 -- # local ip 00:31:35.710 21:34:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:35.710 21:34:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:35.710 21:34:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.710 21:34:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.710 21:34:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:35.710 21:34:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.710 21:34:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:35.710 21:34:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:35.710 21:34:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:35.710 21:34:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:35.710 21:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.710 21:34:24 -- common/autotest_common.sh@10 -- # set +x 00:31:35.969 nvme0n1 00:31:35.969 21:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.969 21:34:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.969 21:34:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:35.969 21:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.969 21:34:24 -- common/autotest_common.sh@10 -- # set +x 00:31:35.969 21:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.969 21:34:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.969 21:34:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.969 21:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.969 21:34:25 -- common/autotest_common.sh@10 -- # set +x 00:31:35.969 21:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.969 21:34:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:35.969 21:34:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:35.969 21:34:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:35.969 21:34:25 -- host/auth.sh@44 -- # digest=sha256 00:31:35.969 21:34:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:35.969 21:34:25 -- host/auth.sh@44 -- # keyid=3 00:31:35.969 21:34:25 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:35.969 21:34:25 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:35.969 21:34:25 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:35.969 21:34:25 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:35.969 21:34:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:31:35.969 21:34:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:35.969 21:34:25 -- host/auth.sh@68 -- # digest=sha256 00:31:35.969 21:34:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:35.969 21:34:25 -- host/auth.sh@68 -- # keyid=3 00:31:35.969 21:34:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:35.969 21:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.969 21:34:25 -- common/autotest_common.sh@10 -- # set +x 00:31:35.969 21:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:35.969 21:34:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:35.969 21:34:25 -- nvmf/common.sh@717 -- # local ip 00:31:35.969 21:34:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:35.969 21:34:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:35.969 21:34:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.969 21:34:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.969 21:34:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:35.969 21:34:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.969 21:34:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:35.969 21:34:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:35.969 21:34:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:35.969 21:34:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:35.969 21:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:35.969 21:34:25 -- common/autotest_common.sh@10 -- # set +x 00:31:36.230 nvme0n1 00:31:36.230 21:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.230 21:34:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.230 21:34:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:36.230 21:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.230 21:34:25 -- common/autotest_common.sh@10 -- # set +x 00:31:36.230 21:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.230 21:34:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.230 21:34:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.230 21:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.230 21:34:25 -- common/autotest_common.sh@10 -- # set +x 00:31:36.230 21:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.230 21:34:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:36.230 21:34:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:36.230 21:34:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:36.230 21:34:25 -- host/auth.sh@44 -- # digest=sha256 00:31:36.230 21:34:25 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:36.230 21:34:25 -- host/auth.sh@44 -- # keyid=4 00:31:36.230 21:34:25 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:36.230 21:34:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:36.230 21:34:25 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:31:36.230 21:34:25 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:36.230 21:34:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:31:36.230 21:34:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:36.230 21:34:25 -- host/auth.sh@68 -- # digest=sha256 00:31:36.230 21:34:25 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:36.230 21:34:25 -- host/auth.sh@68 -- # keyid=4 00:31:36.230 21:34:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:36.230 21:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.230 21:34:25 -- common/autotest_common.sh@10 -- # set +x 00:31:36.230 21:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.230 21:34:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:36.230 21:34:25 -- nvmf/common.sh@717 -- # local ip 00:31:36.230 21:34:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:36.230 21:34:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:36.230 21:34:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.230 21:34:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.230 21:34:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:36.230 21:34:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.230 21:34:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:36.230 21:34:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:36.230 21:34:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:36.230 21:34:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:36.230 21:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.230 21:34:25 -- common/autotest_common.sh@10 -- # set +x 00:31:36.490 nvme0n1 00:31:36.490 21:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.490 21:34:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.490 21:34:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:36.490 21:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.490 21:34:25 -- common/autotest_common.sh@10 -- # set +x 00:31:36.490 21:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.490 21:34:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.490 21:34:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.490 21:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:36.490 21:34:25 -- common/autotest_common.sh@10 -- # set +x 00:31:36.490 21:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:36.490 21:34:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:36.490 21:34:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:36.490 21:34:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:36.490 21:34:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:36.490 21:34:25 -- host/auth.sh@44 -- # digest=sha256 00:31:36.490 21:34:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:36.490 21:34:25 -- host/auth.sh@44 -- # keyid=0 00:31:36.490 21:34:25 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:36.490 21:34:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:36.490 21:34:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:38.396 21:34:27 -- 
host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:38.396 21:34:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:31:38.396 21:34:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:38.396 21:34:27 -- host/auth.sh@68 -- # digest=sha256 00:31:38.396 21:34:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:38.396 21:34:27 -- host/auth.sh@68 -- # keyid=0 00:31:38.396 21:34:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:38.396 21:34:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.396 21:34:27 -- common/autotest_common.sh@10 -- # set +x 00:31:38.396 21:34:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.396 21:34:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:38.396 21:34:27 -- nvmf/common.sh@717 -- # local ip 00:31:38.396 21:34:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:38.396 21:34:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:38.396 21:34:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.396 21:34:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.396 21:34:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:38.396 21:34:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.396 21:34:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:38.396 21:34:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:38.396 21:34:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:38.396 21:34:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:38.396 21:34:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.396 21:34:27 -- common/autotest_common.sh@10 -- # set +x 00:31:38.396 nvme0n1 00:31:38.396 21:34:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.396 21:34:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.396 21:34:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.396 21:34:27 -- common/autotest_common.sh@10 -- # set +x 00:31:38.396 21:34:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:38.396 21:34:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.396 21:34:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.396 21:34:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.396 21:34:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.396 21:34:27 -- common/autotest_common.sh@10 -- # set +x 00:31:38.396 21:34:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.396 21:34:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:38.396 21:34:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:38.396 21:34:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:38.396 21:34:27 -- host/auth.sh@44 -- # digest=sha256 00:31:38.396 21:34:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:38.396 21:34:27 -- host/auth.sh@44 -- # keyid=1 00:31:38.396 21:34:27 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:38.396 21:34:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:38.396 21:34:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:38.396 21:34:27 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:38.396 21:34:27 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:31:38.396 21:34:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:38.396 21:34:27 -- host/auth.sh@68 -- # digest=sha256 00:31:38.396 21:34:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:38.396 21:34:27 -- host/auth.sh@68 -- # keyid=1 00:31:38.396 21:34:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:38.396 21:34:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.396 21:34:27 -- common/autotest_common.sh@10 -- # set +x 00:31:38.396 21:34:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.396 21:34:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:38.396 21:34:27 -- nvmf/common.sh@717 -- # local ip 00:31:38.396 21:34:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:38.396 21:34:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:38.396 21:34:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.396 21:34:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.396 21:34:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:38.396 21:34:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.397 21:34:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:38.397 21:34:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:38.397 21:34:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:38.397 21:34:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:38.397 21:34:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.397 21:34:27 -- common/autotest_common.sh@10 -- # set +x 00:31:38.656 nvme0n1 00:31:38.656 21:34:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.656 21:34:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:38.656 21:34:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:38.656 21:34:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.656 21:34:27 -- common/autotest_common.sh@10 -- # set +x 00:31:38.946 21:34:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.946 21:34:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:38.946 21:34:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:38.946 21:34:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.946 21:34:27 -- common/autotest_common.sh@10 -- # set +x 00:31:38.946 21:34:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.946 21:34:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:38.946 21:34:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:38.946 21:34:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:38.946 21:34:27 -- host/auth.sh@44 -- # digest=sha256 00:31:38.946 21:34:27 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:38.946 21:34:27 -- host/auth.sh@44 -- # keyid=2 00:31:38.946 21:34:27 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:38.946 21:34:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:38.946 21:34:27 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:38.946 21:34:27 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:38.946 21:34:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:31:38.946 21:34:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:38.946 21:34:27 -- 
host/auth.sh@68 -- # digest=sha256 00:31:38.946 21:34:27 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:38.946 21:34:27 -- host/auth.sh@68 -- # keyid=2 00:31:38.946 21:34:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:38.946 21:34:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.946 21:34:27 -- common/autotest_common.sh@10 -- # set +x 00:31:38.946 21:34:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.946 21:34:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:38.946 21:34:27 -- nvmf/common.sh@717 -- # local ip 00:31:38.946 21:34:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:38.946 21:34:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:38.946 21:34:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:38.946 21:34:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:38.946 21:34:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:38.946 21:34:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:38.946 21:34:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:38.946 21:34:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:38.946 21:34:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:38.946 21:34:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:38.946 21:34:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.946 21:34:27 -- common/autotest_common.sh@10 -- # set +x 00:31:39.213 nvme0n1 00:31:39.213 21:34:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.213 21:34:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.213 21:34:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:39.213 21:34:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.213 21:34:28 -- common/autotest_common.sh@10 -- # set +x 00:31:39.213 21:34:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.213 21:34:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.213 21:34:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.213 21:34:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.213 21:34:28 -- common/autotest_common.sh@10 -- # set +x 00:31:39.213 21:34:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.213 21:34:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:39.213 21:34:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:39.213 21:34:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:39.213 21:34:28 -- host/auth.sh@44 -- # digest=sha256 00:31:39.213 21:34:28 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:39.213 21:34:28 -- host/auth.sh@44 -- # keyid=3 00:31:39.213 21:34:28 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:39.213 21:34:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:39.213 21:34:28 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:39.213 21:34:28 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:39.213 21:34:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:31:39.213 21:34:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:39.213 21:34:28 -- host/auth.sh@68 -- # digest=sha256 00:31:39.213 21:34:28 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:39.213 21:34:28 
-- host/auth.sh@68 -- # keyid=3 00:31:39.213 21:34:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:39.213 21:34:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.213 21:34:28 -- common/autotest_common.sh@10 -- # set +x 00:31:39.213 21:34:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.213 21:34:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:39.213 21:34:28 -- nvmf/common.sh@717 -- # local ip 00:31:39.213 21:34:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:39.213 21:34:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:39.213 21:34:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.213 21:34:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.213 21:34:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:39.213 21:34:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.213 21:34:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:39.213 21:34:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:39.213 21:34:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:39.213 21:34:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:39.213 21:34:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.213 21:34:28 -- common/autotest_common.sh@10 -- # set +x 00:31:39.781 nvme0n1 00:31:39.781 21:34:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.781 21:34:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.781 21:34:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.781 21:34:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:39.781 21:34:28 -- common/autotest_common.sh@10 -- # set +x 00:31:39.781 21:34:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.781 21:34:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.781 21:34:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.781 21:34:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.781 21:34:28 -- common/autotest_common.sh@10 -- # set +x 00:31:39.781 21:34:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.781 21:34:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:39.781 21:34:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:39.781 21:34:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:39.781 21:34:28 -- host/auth.sh@44 -- # digest=sha256 00:31:39.781 21:34:28 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:39.781 21:34:28 -- host/auth.sh@44 -- # keyid=4 00:31:39.781 21:34:28 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:39.781 21:34:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:39.781 21:34:28 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:39.781 21:34:28 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:39.781 21:34:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:31:39.781 21:34:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:39.781 21:34:28 -- host/auth.sh@68 -- # digest=sha256 00:31:39.781 21:34:28 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:39.781 21:34:28 -- host/auth.sh@68 -- # keyid=4 00:31:39.781 21:34:28 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:39.781 21:34:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.781 21:34:28 -- common/autotest_common.sh@10 -- # set +x 00:31:39.781 21:34:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:39.781 21:34:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:39.781 21:34:28 -- nvmf/common.sh@717 -- # local ip 00:31:39.781 21:34:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:39.781 21:34:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:39.781 21:34:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.781 21:34:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.781 21:34:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:39.781 21:34:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.781 21:34:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:39.781 21:34:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:39.781 21:34:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:39.781 21:34:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:39.781 21:34:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:39.781 21:34:28 -- common/autotest_common.sh@10 -- # set +x 00:31:40.040 nvme0n1 00:31:40.040 21:34:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:40.040 21:34:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.040 21:34:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:40.040 21:34:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:40.040 21:34:29 -- common/autotest_common.sh@10 -- # set +x 00:31:40.040 21:34:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:40.040 21:34:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.040 21:34:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.040 21:34:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:40.040 21:34:29 -- common/autotest_common.sh@10 -- # set +x 00:31:40.040 21:34:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:40.040 21:34:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:40.040 21:34:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:40.040 21:34:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:40.040 21:34:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:40.040 21:34:29 -- host/auth.sh@44 -- # digest=sha256 00:31:40.040 21:34:29 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:40.040 21:34:29 -- host/auth.sh@44 -- # keyid=0 00:31:40.040 21:34:29 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:40.040 21:34:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:40.040 21:34:29 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:43.336 21:34:32 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:43.336 21:34:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:31:43.336 21:34:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:43.336 21:34:32 -- host/auth.sh@68 -- # digest=sha256 00:31:43.336 21:34:32 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:43.336 21:34:32 -- host/auth.sh@68 -- # keyid=0 00:31:43.336 21:34:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
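[editor's note] Each pass through the trace above repeats the same host-side sequence for one (digest, dhgroup, keyid) combination. A minimal, hedged sketch of a single iteration follows; it assumes the rpc_cmd helper seen in the trace wraps scripts/rpc.py from the SPDK checkout, that the target from earlier in the log is still listening on 10.0.0.1:4420, and that key0..key4 name DH-HMAC-CHAP keys registered earlier in the run (not shown in this excerpt). Only flags that appear verbatim in the trace are used.

# Sketch of one connect_authenticate iteration (assumptions noted above).
digest=sha256
dhgroup=ffdhe8192
keyid=0

# Restrict the host to the digest/dhgroup pair under test.
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach to the target over TCP, authenticating with the selected key.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid"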
00:31:43.336 21:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:43.336 21:34:32 -- common/autotest_common.sh@10 -- # set +x 00:31:43.336 21:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:43.336 21:34:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:43.336 21:34:32 -- nvmf/common.sh@717 -- # local ip 00:31:43.336 21:34:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:43.336 21:34:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:43.336 21:34:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.336 21:34:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.336 21:34:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:43.336 21:34:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.336 21:34:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:43.336 21:34:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:43.336 21:34:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:43.336 21:34:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:43.336 21:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:43.336 21:34:32 -- common/autotest_common.sh@10 -- # set +x 00:31:43.915 nvme0n1 00:31:43.915 21:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:43.915 21:34:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:43.915 21:34:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.915 21:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:43.915 21:34:32 -- common/autotest_common.sh@10 -- # set +x 00:31:43.915 21:34:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:43.915 21:34:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.915 21:34:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.915 21:34:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:43.915 21:34:32 -- common/autotest_common.sh@10 -- # set +x 00:31:43.915 21:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:43.915 21:34:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:43.915 21:34:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:43.915 21:34:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:43.915 21:34:33 -- host/auth.sh@44 -- # digest=sha256 00:31:43.915 21:34:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:43.915 21:34:33 -- host/auth.sh@44 -- # keyid=1 00:31:43.915 21:34:33 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:43.915 21:34:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:43.915 21:34:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:43.915 21:34:33 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:43.915 21:34:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:31:43.915 21:34:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:43.915 21:34:33 -- host/auth.sh@68 -- # digest=sha256 00:31:43.915 21:34:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:43.915 21:34:33 -- host/auth.sh@68 -- # keyid=1 00:31:43.915 21:34:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:43.915 21:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:43.915 21:34:33 -- 
common/autotest_common.sh@10 -- # set +x 00:31:43.915 21:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:43.915 21:34:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:43.915 21:34:33 -- nvmf/common.sh@717 -- # local ip 00:31:43.915 21:34:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:43.915 21:34:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:43.915 21:34:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.915 21:34:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.915 21:34:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:43.915 21:34:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.915 21:34:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:43.915 21:34:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:43.915 21:34:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:43.915 21:34:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:43.915 21:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:43.915 21:34:33 -- common/autotest_common.sh@10 -- # set +x 00:31:44.481 nvme0n1 00:31:44.481 21:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.481 21:34:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.481 21:34:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:44.481 21:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.481 21:34:33 -- common/autotest_common.sh@10 -- # set +x 00:31:44.481 21:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.481 21:34:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.481 21:34:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.481 21:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.482 21:34:33 -- common/autotest_common.sh@10 -- # set +x 00:31:44.482 21:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.482 21:34:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:44.482 21:34:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:44.482 21:34:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:44.482 21:34:33 -- host/auth.sh@44 -- # digest=sha256 00:31:44.482 21:34:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:44.482 21:34:33 -- host/auth.sh@44 -- # keyid=2 00:31:44.482 21:34:33 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:44.482 21:34:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:44.482 21:34:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:44.482 21:34:33 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:44.482 21:34:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:31:44.482 21:34:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:44.482 21:34:33 -- host/auth.sh@68 -- # digest=sha256 00:31:44.482 21:34:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:44.482 21:34:33 -- host/auth.sh@68 -- # keyid=2 00:31:44.482 21:34:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:44.482 21:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.482 21:34:33 -- common/autotest_common.sh@10 -- # set +x 00:31:44.482 21:34:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.482 21:34:33 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:31:44.482 21:34:33 -- nvmf/common.sh@717 -- # local ip 00:31:44.482 21:34:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:44.482 21:34:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:44.482 21:34:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.482 21:34:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.482 21:34:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:44.482 21:34:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.482 21:34:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:44.482 21:34:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:44.482 21:34:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:44.482 21:34:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:44.482 21:34:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.482 21:34:33 -- common/autotest_common.sh@10 -- # set +x 00:31:45.049 nvme0n1 00:31:45.049 21:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:45.049 21:34:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:45.049 21:34:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.049 21:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:45.049 21:34:34 -- common/autotest_common.sh@10 -- # set +x 00:31:45.049 21:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:45.049 21:34:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.049 21:34:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.049 21:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:45.049 21:34:34 -- common/autotest_common.sh@10 -- # set +x 00:31:45.049 21:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:45.049 21:34:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:45.049 21:34:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:45.049 21:34:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:45.049 21:34:34 -- host/auth.sh@44 -- # digest=sha256 00:31:45.049 21:34:34 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:45.049 21:34:34 -- host/auth.sh@44 -- # keyid=3 00:31:45.049 21:34:34 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:45.049 21:34:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:45.049 21:34:34 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:45.049 21:34:34 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:45.049 21:34:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:31:45.049 21:34:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:45.049 21:34:34 -- host/auth.sh@68 -- # digest=sha256 00:31:45.049 21:34:34 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:45.049 21:34:34 -- host/auth.sh@68 -- # keyid=3 00:31:45.049 21:34:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:45.049 21:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:45.049 21:34:34 -- common/autotest_common.sh@10 -- # set +x 00:31:45.049 21:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:45.049 21:34:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:45.049 21:34:34 -- nvmf/common.sh@717 -- # local ip 00:31:45.049 21:34:34 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:31:45.049 21:34:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:45.049 21:34:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.049 21:34:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.049 21:34:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:45.049 21:34:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.049 21:34:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:45.049 21:34:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:45.049 21:34:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:45.049 21:34:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:45.049 21:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:45.049 21:34:34 -- common/autotest_common.sh@10 -- # set +x 00:31:45.615 nvme0n1 00:31:45.615 21:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:45.615 21:34:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.615 21:34:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:45.615 21:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:45.615 21:34:34 -- common/autotest_common.sh@10 -- # set +x 00:31:45.615 21:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:45.874 21:34:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.874 21:34:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.874 21:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:45.874 21:34:34 -- common/autotest_common.sh@10 -- # set +x 00:31:45.874 21:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:45.874 21:34:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:45.874 21:34:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:45.874 21:34:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:45.874 21:34:34 -- host/auth.sh@44 -- # digest=sha256 00:31:45.874 21:34:34 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:45.874 21:34:34 -- host/auth.sh@44 -- # keyid=4 00:31:45.874 21:34:34 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:45.874 21:34:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:31:45.874 21:34:34 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:45.874 21:34:34 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:45.874 21:34:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:31:45.874 21:34:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:45.874 21:34:34 -- host/auth.sh@68 -- # digest=sha256 00:31:45.874 21:34:34 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:45.874 21:34:34 -- host/auth.sh@68 -- # keyid=4 00:31:45.874 21:34:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:45.874 21:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:45.874 21:34:34 -- common/autotest_common.sh@10 -- # set +x 00:31:45.874 21:34:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:45.874 21:34:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:45.874 21:34:34 -- nvmf/common.sh@717 -- # local ip 00:31:45.874 21:34:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:45.874 21:34:34 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:31:45.874 21:34:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.874 21:34:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.874 21:34:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:45.874 21:34:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.874 21:34:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:45.874 21:34:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:45.874 21:34:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:45.874 21:34:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:45.874 21:34:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:45.874 21:34:34 -- common/autotest_common.sh@10 -- # set +x 00:31:46.442 nvme0n1 00:31:46.442 21:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.442 21:34:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.442 21:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.442 21:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.442 21:34:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:46.442 21:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.442 21:34:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.442 21:34:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.442 21:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.442 21:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.442 21:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.442 21:34:35 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:31:46.442 21:34:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:46.442 21:34:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:46.442 21:34:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:46.442 21:34:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:46.442 21:34:35 -- host/auth.sh@44 -- # digest=sha384 00:31:46.442 21:34:35 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:46.442 21:34:35 -- host/auth.sh@44 -- # keyid=0 00:31:46.442 21:34:35 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:46.442 21:34:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:46.442 21:34:35 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:46.442 21:34:35 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:46.442 21:34:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:31:46.442 21:34:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:46.442 21:34:35 -- host/auth.sh@68 -- # digest=sha384 00:31:46.442 21:34:35 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:46.442 21:34:35 -- host/auth.sh@68 -- # keyid=0 00:31:46.442 21:34:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:46.442 21:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.442 21:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.442 21:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.442 21:34:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:46.442 21:34:35 -- nvmf/common.sh@717 -- # local ip 00:31:46.442 21:34:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:46.442 21:34:35 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:31:46.442 21:34:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.442 21:34:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.442 21:34:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:46.442 21:34:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.442 21:34:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:46.442 21:34:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:46.442 21:34:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:46.442 21:34:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:46.442 21:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.442 21:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.442 nvme0n1 00:31:46.442 21:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.442 21:34:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.442 21:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.442 21:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.442 21:34:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:46.700 21:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.700 21:34:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.700 21:34:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.700 21:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.700 21:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.700 21:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.700 21:34:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:46.700 21:34:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:46.700 21:34:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:46.700 21:34:35 -- host/auth.sh@44 -- # digest=sha384 00:31:46.700 21:34:35 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:46.700 21:34:35 -- host/auth.sh@44 -- # keyid=1 00:31:46.700 21:34:35 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:46.700 21:34:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:46.701 21:34:35 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:46.701 21:34:35 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:46.701 21:34:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:31:46.701 21:34:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:46.701 21:34:35 -- host/auth.sh@68 -- # digest=sha384 00:31:46.701 21:34:35 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:46.701 21:34:35 -- host/auth.sh@68 -- # keyid=1 00:31:46.701 21:34:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:46.701 21:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.701 21:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.701 21:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.701 21:34:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:46.701 21:34:35 -- nvmf/common.sh@717 -- # local ip 00:31:46.701 21:34:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:46.701 21:34:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:46.701 21:34:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.701 
21:34:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.701 21:34:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:46.701 21:34:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.701 21:34:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:46.701 21:34:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:46.701 21:34:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:46.701 21:34:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:46.701 21:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.701 21:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.701 nvme0n1 00:31:46.701 21:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.701 21:34:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.701 21:34:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:46.701 21:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.701 21:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.701 21:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.701 21:34:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.701 21:34:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.701 21:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.701 21:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.701 21:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.701 21:34:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:46.701 21:34:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:46.701 21:34:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:46.701 21:34:35 -- host/auth.sh@44 -- # digest=sha384 00:31:46.701 21:34:35 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:46.701 21:34:35 -- host/auth.sh@44 -- # keyid=2 00:31:46.701 21:34:35 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:46.701 21:34:35 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:46.701 21:34:35 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:46.701 21:34:35 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:46.701 21:34:35 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:31:46.701 21:34:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:46.701 21:34:35 -- host/auth.sh@68 -- # digest=sha384 00:31:46.701 21:34:35 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:46.701 21:34:35 -- host/auth.sh@68 -- # keyid=2 00:31:46.701 21:34:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:46.701 21:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.701 21:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.701 21:34:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.701 21:34:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:46.701 21:34:35 -- nvmf/common.sh@717 -- # local ip 00:31:46.701 21:34:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:46.701 21:34:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:46.701 21:34:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.701 21:34:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.701 21:34:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:46.701 21:34:35 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.701 21:34:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:46.701 21:34:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:46.701 21:34:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:46.701 21:34:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:46.701 21:34:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.701 21:34:35 -- common/autotest_common.sh@10 -- # set +x 00:31:46.959 nvme0n1 00:31:46.959 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.959 21:34:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.959 21:34:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:46.959 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.959 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:46.959 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.959 21:34:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.959 21:34:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.959 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.959 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:46.959 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.959 21:34:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:46.959 21:34:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:46.959 21:34:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:46.959 21:34:36 -- host/auth.sh@44 -- # digest=sha384 00:31:46.959 21:34:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:46.959 21:34:36 -- host/auth.sh@44 -- # keyid=3 00:31:46.959 21:34:36 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:46.959 21:34:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:46.959 21:34:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:46.959 21:34:36 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:46.959 21:34:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:31:46.959 21:34:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:46.959 21:34:36 -- host/auth.sh@68 -- # digest=sha384 00:31:46.959 21:34:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:46.959 21:34:36 -- host/auth.sh@68 -- # keyid=3 00:31:46.959 21:34:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:46.959 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.959 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:46.959 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.959 21:34:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:46.959 21:34:36 -- nvmf/common.sh@717 -- # local ip 00:31:46.959 21:34:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:46.959 21:34:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:46.959 21:34:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.959 21:34:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.959 21:34:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:46.959 21:34:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.959 21:34:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
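[editor's note] Between iterations the trace verifies that the authenticated controller actually came up (the [[ nvme0 == \n\v\m\e\0 ]] check) and then detaches it before the next digest/dhgroup/key combination. A short sketch of that verify-and-detach step, under the same assumption that rpc_cmd wraps scripts/rpc.py:

# Sketch of the verify-and-detach step seen between iterations above.
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
if [[ "$name" == "nvme0" ]]; then
    # Authentication succeeded and the controller exists; clean up for the next pass.
    scripts/rpc.py bdev_nvme_detach_controller nvme0
else
    echo "controller nvme0 not found after authenticated attach" >&2
    exit 1
fi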
00:31:46.959 21:34:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:46.959 21:34:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:46.959 21:34:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:46.959 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.959 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:46.959 nvme0n1 00:31:46.959 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.959 21:34:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.959 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.959 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:46.959 21:34:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:46.959 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.217 21:34:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.217 21:34:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.217 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.217 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.217 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.217 21:34:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:47.217 21:34:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:47.217 21:34:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:47.217 21:34:36 -- host/auth.sh@44 -- # digest=sha384 00:31:47.217 21:34:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:47.217 21:34:36 -- host/auth.sh@44 -- # keyid=4 00:31:47.217 21:34:36 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:47.217 21:34:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:47.217 21:34:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:47.217 21:34:36 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:47.217 21:34:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:31:47.217 21:34:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:47.217 21:34:36 -- host/auth.sh@68 -- # digest=sha384 00:31:47.217 21:34:36 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:47.217 21:34:36 -- host/auth.sh@68 -- # keyid=4 00:31:47.217 21:34:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:47.217 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.217 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.217 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.217 21:34:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:47.217 21:34:36 -- nvmf/common.sh@717 -- # local ip 00:31:47.217 21:34:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:47.217 21:34:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:47.217 21:34:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.217 21:34:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.217 21:34:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:47.217 21:34:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.217 21:34:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:47.217 21:34:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:47.217 
21:34:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:47.217 21:34:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:47.217 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.217 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.217 nvme0n1 00:31:47.217 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.217 21:34:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:47.217 21:34:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.217 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.217 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.217 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.217 21:34:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.217 21:34:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.217 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.217 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.217 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.217 21:34:36 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:47.217 21:34:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:47.217 21:34:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:47.217 21:34:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:47.217 21:34:36 -- host/auth.sh@44 -- # digest=sha384 00:31:47.217 21:34:36 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:47.217 21:34:36 -- host/auth.sh@44 -- # keyid=0 00:31:47.217 21:34:36 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:47.217 21:34:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:47.217 21:34:36 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:47.217 21:34:36 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:47.218 21:34:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:31:47.218 21:34:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:47.218 21:34:36 -- host/auth.sh@68 -- # digest=sha384 00:31:47.218 21:34:36 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:47.218 21:34:36 -- host/auth.sh@68 -- # keyid=0 00:31:47.218 21:34:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:47.218 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.218 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.218 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.218 21:34:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:47.218 21:34:36 -- nvmf/common.sh@717 -- # local ip 00:31:47.218 21:34:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:47.218 21:34:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:47.218 21:34:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.218 21:34:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.218 21:34:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:47.218 21:34:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.218 21:34:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:47.218 21:34:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:47.218 21:34:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:47.218 21:34:36 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:47.218 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.218 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.476 nvme0n1 00:31:47.476 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.476 21:34:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.476 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.476 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.476 21:34:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:47.476 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.476 21:34:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.476 21:34:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.476 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.476 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.476 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.476 21:34:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:47.476 21:34:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:47.476 21:34:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:47.476 21:34:36 -- host/auth.sh@44 -- # digest=sha384 00:31:47.476 21:34:36 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:47.476 21:34:36 -- host/auth.sh@44 -- # keyid=1 00:31:47.476 21:34:36 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:47.476 21:34:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:47.476 21:34:36 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:47.476 21:34:36 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:47.476 21:34:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:31:47.476 21:34:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:47.476 21:34:36 -- host/auth.sh@68 -- # digest=sha384 00:31:47.476 21:34:36 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:47.476 21:34:36 -- host/auth.sh@68 -- # keyid=1 00:31:47.476 21:34:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:47.476 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.476 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.476 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.476 21:34:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:47.476 21:34:36 -- nvmf/common.sh@717 -- # local ip 00:31:47.476 21:34:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:47.476 21:34:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:47.476 21:34:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.476 21:34:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.476 21:34:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:47.476 21:34:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.476 21:34:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:47.476 21:34:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:47.476 21:34:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:47.476 21:34:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:47.476 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.476 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.735 nvme0n1 00:31:47.735 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.735 21:34:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.735 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.735 21:34:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:47.735 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.735 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.735 21:34:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.735 21:34:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.735 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.735 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.735 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.735 21:34:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:47.735 21:34:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:47.735 21:34:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:47.735 21:34:36 -- host/auth.sh@44 -- # digest=sha384 00:31:47.735 21:34:36 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:47.735 21:34:36 -- host/auth.sh@44 -- # keyid=2 00:31:47.735 21:34:36 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:47.735 21:34:36 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:47.735 21:34:36 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:47.735 21:34:36 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:47.735 21:34:36 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:31:47.735 21:34:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:47.735 21:34:36 -- host/auth.sh@68 -- # digest=sha384 00:31:47.735 21:34:36 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:47.735 21:34:36 -- host/auth.sh@68 -- # keyid=2 00:31:47.735 21:34:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:47.735 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.735 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.735 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.735 21:34:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:47.735 21:34:36 -- nvmf/common.sh@717 -- # local ip 00:31:47.735 21:34:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:47.735 21:34:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:47.735 21:34:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.735 21:34:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.735 21:34:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:47.735 21:34:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.735 21:34:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:47.735 21:34:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:47.735 21:34:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:47.735 21:34:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:47.735 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.735 
21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.735 nvme0n1 00:31:47.735 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.735 21:34:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:47.735 21:34:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.735 21:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.735 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:31:47.735 21:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.995 21:34:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.995 21:34:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.995 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.995 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:47.995 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.995 21:34:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:47.995 21:34:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:47.995 21:34:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:47.995 21:34:37 -- host/auth.sh@44 -- # digest=sha384 00:31:47.995 21:34:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:47.995 21:34:37 -- host/auth.sh@44 -- # keyid=3 00:31:47.995 21:34:37 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:47.995 21:34:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:47.995 21:34:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:47.995 21:34:37 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:47.995 21:34:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:31:47.995 21:34:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:47.995 21:34:37 -- host/auth.sh@68 -- # digest=sha384 00:31:47.995 21:34:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:47.995 21:34:37 -- host/auth.sh@68 -- # keyid=3 00:31:47.995 21:34:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:47.995 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.995 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:47.995 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.995 21:34:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:47.995 21:34:37 -- nvmf/common.sh@717 -- # local ip 00:31:47.995 21:34:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:47.995 21:34:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:47.995 21:34:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.995 21:34:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.995 21:34:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:47.995 21:34:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.995 21:34:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:47.995 21:34:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:47.995 21:34:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:47.995 21:34:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:47.995 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.995 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:47.995 nvme0n1 00:31:47.995 21:34:37 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.995 21:34:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.995 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.995 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:47.995 21:34:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:47.995 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.995 21:34:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.995 21:34:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.995 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.995 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:47.995 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.995 21:34:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:47.995 21:34:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:47.995 21:34:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:47.995 21:34:37 -- host/auth.sh@44 -- # digest=sha384 00:31:47.995 21:34:37 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:47.995 21:34:37 -- host/auth.sh@44 -- # keyid=4 00:31:47.995 21:34:37 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:47.995 21:34:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:47.995 21:34:37 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:47.995 21:34:37 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:47.995 21:34:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:31:47.995 21:34:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:47.995 21:34:37 -- host/auth.sh@68 -- # digest=sha384 00:31:47.995 21:34:37 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:47.995 21:34:37 -- host/auth.sh@68 -- # keyid=4 00:31:47.995 21:34:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:47.995 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.995 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:47.995 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:47.995 21:34:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:47.995 21:34:37 -- nvmf/common.sh@717 -- # local ip 00:31:47.995 21:34:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:47.995 21:34:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:47.995 21:34:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.995 21:34:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.995 21:34:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:47.995 21:34:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.995 21:34:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:47.995 21:34:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:47.995 21:34:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:47.995 21:34:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:47.995 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:47.995 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:48.253 nvme0n1 00:31:48.253 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.253 21:34:37 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.253 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.253 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:48.253 21:34:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:48.253 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.253 21:34:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.253 21:34:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.253 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.253 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:48.253 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.253 21:34:37 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:48.253 21:34:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:48.253 21:34:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:48.253 21:34:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:48.253 21:34:37 -- host/auth.sh@44 -- # digest=sha384 00:31:48.253 21:34:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:48.253 21:34:37 -- host/auth.sh@44 -- # keyid=0 00:31:48.253 21:34:37 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:48.253 21:34:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:48.253 21:34:37 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:48.253 21:34:37 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:48.253 21:34:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:31:48.253 21:34:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:48.253 21:34:37 -- host/auth.sh@68 -- # digest=sha384 00:31:48.253 21:34:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:48.253 21:34:37 -- host/auth.sh@68 -- # keyid=0 00:31:48.253 21:34:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:48.253 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.253 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:48.253 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.253 21:34:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:48.253 21:34:37 -- nvmf/common.sh@717 -- # local ip 00:31:48.253 21:34:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:48.253 21:34:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:48.253 21:34:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.253 21:34:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.253 21:34:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:48.253 21:34:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.253 21:34:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:48.253 21:34:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:48.253 21:34:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:48.253 21:34:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:48.253 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.253 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:48.511 nvme0n1 00:31:48.511 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.511 21:34:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.511 21:34:37 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.511 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:48.511 21:34:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:48.511 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.511 21:34:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.511 21:34:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.511 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.511 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:48.511 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.511 21:34:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:48.511 21:34:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:48.511 21:34:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:48.511 21:34:37 -- host/auth.sh@44 -- # digest=sha384 00:31:48.511 21:34:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:48.511 21:34:37 -- host/auth.sh@44 -- # keyid=1 00:31:48.511 21:34:37 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:48.511 21:34:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:48.511 21:34:37 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:48.511 21:34:37 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:48.511 21:34:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:31:48.511 21:34:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:48.511 21:34:37 -- host/auth.sh@68 -- # digest=sha384 00:31:48.511 21:34:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:48.511 21:34:37 -- host/auth.sh@68 -- # keyid=1 00:31:48.511 21:34:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:48.511 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.511 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:48.511 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.511 21:34:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:48.511 21:34:37 -- nvmf/common.sh@717 -- # local ip 00:31:48.511 21:34:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:48.511 21:34:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:48.511 21:34:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.511 21:34:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.511 21:34:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:48.511 21:34:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.511 21:34:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:48.511 21:34:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:48.511 21:34:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:48.511 21:34:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:48.511 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.511 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:48.770 nvme0n1 00:31:48.770 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.770 21:34:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.770 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.770 21:34:37 -- common/autotest_common.sh@10 -- # set +x 
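The stretch of trace above and below repeats one fixed initiator-side cycle for every digest/DH-group/key combination. Condensed into plain shell it looks roughly like the sketch below; the test drives it through its rpc_cmd helper, which forwards the same arguments to scripts/rpc.py against the running SPDK application, and the address, NQNs and key name are the ones visible in the trace (sha384 / ffdhe4096 / key1 at this point).

    rpc=scripts/rpc.py   # stand-in for the test's rpc_cmd wrapper

    # 1. Restrict the initiator to the digest/DH-group pair under test.
    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

    # 2. Attach the controller over TCP, authenticating with the named DH-HMAC-CHAP key.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1

    # 3. Confirm the controller really came up, then detach it for the next round.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0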
00:31:48.770 21:34:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:48.770 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.770 21:34:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.770 21:34:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.770 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.770 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:48.770 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.770 21:34:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:48.770 21:34:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:48.770 21:34:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:48.770 21:34:37 -- host/auth.sh@44 -- # digest=sha384 00:31:48.770 21:34:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:48.770 21:34:37 -- host/auth.sh@44 -- # keyid=2 00:31:48.770 21:34:37 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:48.770 21:34:37 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:48.770 21:34:37 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:48.770 21:34:37 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:48.770 21:34:37 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:31:48.770 21:34:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:48.770 21:34:37 -- host/auth.sh@68 -- # digest=sha384 00:31:48.770 21:34:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:48.770 21:34:37 -- host/auth.sh@68 -- # keyid=2 00:31:48.770 21:34:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:48.770 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.770 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:48.770 21:34:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:48.770 21:34:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:48.770 21:34:37 -- nvmf/common.sh@717 -- # local ip 00:31:48.770 21:34:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:48.770 21:34:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:48.770 21:34:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.770 21:34:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.770 21:34:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:48.770 21:34:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.770 21:34:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:48.770 21:34:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:48.770 21:34:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:48.770 21:34:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:48.770 21:34:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:48.770 21:34:37 -- common/autotest_common.sh@10 -- # set +x 00:31:49.028 nvme0n1 00:31:49.028 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.028 21:34:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.028 21:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.028 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:31:49.028 21:34:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:49.028 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.028 21:34:38 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.028 21:34:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.028 21:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.028 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:31:49.028 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.028 21:34:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:49.028 21:34:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:31:49.028 21:34:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:49.028 21:34:38 -- host/auth.sh@44 -- # digest=sha384 00:31:49.028 21:34:38 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:49.028 21:34:38 -- host/auth.sh@44 -- # keyid=3 00:31:49.029 21:34:38 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:49.029 21:34:38 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:49.029 21:34:38 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:49.029 21:34:38 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:49.029 21:34:38 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:31:49.029 21:34:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:49.029 21:34:38 -- host/auth.sh@68 -- # digest=sha384 00:31:49.029 21:34:38 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:49.029 21:34:38 -- host/auth.sh@68 -- # keyid=3 00:31:49.029 21:34:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:49.029 21:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.029 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:31:49.029 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.029 21:34:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:49.029 21:34:38 -- nvmf/common.sh@717 -- # local ip 00:31:49.029 21:34:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:49.029 21:34:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:49.029 21:34:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.029 21:34:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.029 21:34:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:49.029 21:34:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.029 21:34:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:49.029 21:34:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:49.029 21:34:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:49.029 21:34:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:49.029 21:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.029 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:31:49.287 nvme0n1 00:31:49.287 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.287 21:34:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.287 21:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.287 21:34:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:49.287 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:31:49.287 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.287 21:34:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.287 21:34:38 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:49.287 21:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.287 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:31:49.287 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.287 21:34:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:49.287 21:34:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:49.287 21:34:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:49.287 21:34:38 -- host/auth.sh@44 -- # digest=sha384 00:31:49.287 21:34:38 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:49.287 21:34:38 -- host/auth.sh@44 -- # keyid=4 00:31:49.287 21:34:38 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:49.287 21:34:38 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:49.287 21:34:38 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:49.287 21:34:38 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:49.287 21:34:38 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:31:49.287 21:34:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:49.287 21:34:38 -- host/auth.sh@68 -- # digest=sha384 00:31:49.287 21:34:38 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:49.287 21:34:38 -- host/auth.sh@68 -- # keyid=4 00:31:49.287 21:34:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:49.287 21:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.287 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:31:49.287 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.287 21:34:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:49.287 21:34:38 -- nvmf/common.sh@717 -- # local ip 00:31:49.287 21:34:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:49.287 21:34:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:49.287 21:34:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.287 21:34:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.287 21:34:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:49.287 21:34:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.287 21:34:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:49.287 21:34:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:49.287 21:34:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:49.287 21:34:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:49.287 21:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.287 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:31:49.546 nvme0n1 00:31:49.546 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.546 21:34:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.546 21:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.546 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:31:49.546 21:34:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:49.546 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.546 21:34:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.546 21:34:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.546 21:34:38 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.546 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:31:49.546 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.546 21:34:38 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:49.546 21:34:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:49.546 21:34:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:49.546 21:34:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:49.546 21:34:38 -- host/auth.sh@44 -- # digest=sha384 00:31:49.546 21:34:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:49.546 21:34:38 -- host/auth.sh@44 -- # keyid=0 00:31:49.546 21:34:38 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:49.546 21:34:38 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:49.546 21:34:38 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:49.546 21:34:38 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:49.546 21:34:38 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:31:49.546 21:34:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:49.546 21:34:38 -- host/auth.sh@68 -- # digest=sha384 00:31:49.546 21:34:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:49.546 21:34:38 -- host/auth.sh@68 -- # keyid=0 00:31:49.546 21:34:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:49.546 21:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.546 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:31:49.546 21:34:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:49.546 21:34:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:49.546 21:34:38 -- nvmf/common.sh@717 -- # local ip 00:31:49.546 21:34:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:49.546 21:34:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:49.546 21:34:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.546 21:34:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.546 21:34:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:49.546 21:34:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.546 21:34:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:49.546 21:34:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:49.546 21:34:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:49.546 21:34:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:49.546 21:34:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:49.546 21:34:38 -- common/autotest_common.sh@10 -- # set +x 00:31:50.113 nvme0n1 00:31:50.113 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.113 21:34:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.113 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.113 21:34:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:50.113 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:31:50.113 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.113 21:34:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.113 21:34:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.113 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.113 21:34:39 -- 
common/autotest_common.sh@10 -- # set +x 00:31:50.113 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.113 21:34:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:50.113 21:34:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:50.113 21:34:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:50.113 21:34:39 -- host/auth.sh@44 -- # digest=sha384 00:31:50.113 21:34:39 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:50.113 21:34:39 -- host/auth.sh@44 -- # keyid=1 00:31:50.113 21:34:39 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:50.113 21:34:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:50.113 21:34:39 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:50.113 21:34:39 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:50.113 21:34:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:31:50.113 21:34:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:50.113 21:34:39 -- host/auth.sh@68 -- # digest=sha384 00:31:50.113 21:34:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:50.113 21:34:39 -- host/auth.sh@68 -- # keyid=1 00:31:50.113 21:34:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:50.113 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.113 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:31:50.113 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.113 21:34:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:50.113 21:34:39 -- nvmf/common.sh@717 -- # local ip 00:31:50.113 21:34:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:50.113 21:34:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:50.113 21:34:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.113 21:34:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.113 21:34:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:50.113 21:34:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.113 21:34:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:50.113 21:34:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:50.113 21:34:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:50.113 21:34:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:50.113 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.113 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:31:50.371 nvme0n1 00:31:50.371 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.371 21:34:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.371 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.371 21:34:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:50.371 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:31:50.371 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.371 21:34:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.371 21:34:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.371 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.371 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:31:50.371 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:31:50.371 21:34:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:50.371 21:34:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:50.371 21:34:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:50.371 21:34:39 -- host/auth.sh@44 -- # digest=sha384 00:31:50.371 21:34:39 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:50.371 21:34:39 -- host/auth.sh@44 -- # keyid=2 00:31:50.371 21:34:39 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:50.371 21:34:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:50.371 21:34:39 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:50.371 21:34:39 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:50.371 21:34:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:31:50.371 21:34:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:50.371 21:34:39 -- host/auth.sh@68 -- # digest=sha384 00:31:50.371 21:34:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:50.371 21:34:39 -- host/auth.sh@68 -- # keyid=2 00:31:50.371 21:34:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:50.371 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.371 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:31:50.371 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.371 21:34:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:50.371 21:34:39 -- nvmf/common.sh@717 -- # local ip 00:31:50.371 21:34:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:50.371 21:34:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:50.371 21:34:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.371 21:34:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.371 21:34:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:50.371 21:34:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.371 21:34:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:50.371 21:34:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:50.371 21:34:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:50.371 21:34:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:50.371 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.371 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:31:50.938 nvme0n1 00:31:50.938 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.938 21:34:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.938 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.938 21:34:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:50.938 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:31:50.938 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.938 21:34:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.938 21:34:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.938 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.938 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:31:50.938 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.938 21:34:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:50.938 21:34:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
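Behind these repeated cycles sits a straightforward nested sweep: the outer loops step over the configured digests and FFDHE groups while the inner one walks the key slots, first programming the kernel target (nvmet_auth_set_key) and then connecting from the SPDK side (connect_authenticate) for each combination. A skeleton of that sweep, reconstructed from the loop markers in the trace (host/auth.sh@107-111), is sketched below; the digests, dhgroups and keys arrays are defined earlier in host/auth.sh, and only the values actually exercised in this part of the log (sha384, ffdhe2048 through ffdhe8192, keyids 0-4) are visible here.

    # Sweep skeleton behind the repeated cycles (a sketch; the array definitions
    # live earlier in host/auth.sh and are not visible in this part of the trace).
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel nvmet target
          connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach via RPC
        done
      done
    done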
00:31:50.938 21:34:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:50.938 21:34:39 -- host/auth.sh@44 -- # digest=sha384 00:31:50.938 21:34:39 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:50.938 21:34:39 -- host/auth.sh@44 -- # keyid=3 00:31:50.938 21:34:39 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:50.938 21:34:39 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:50.938 21:34:39 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:50.938 21:34:39 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:50.938 21:34:39 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:31:50.938 21:34:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:50.938 21:34:39 -- host/auth.sh@68 -- # digest=sha384 00:31:50.938 21:34:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:50.938 21:34:39 -- host/auth.sh@68 -- # keyid=3 00:31:50.938 21:34:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:50.938 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.938 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:31:50.938 21:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:50.938 21:34:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:50.938 21:34:39 -- nvmf/common.sh@717 -- # local ip 00:31:50.938 21:34:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:50.938 21:34:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:50.938 21:34:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.938 21:34:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.938 21:34:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:50.938 21:34:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.938 21:34:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:50.938 21:34:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:50.938 21:34:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:50.938 21:34:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:50.938 21:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:50.938 21:34:39 -- common/autotest_common.sh@10 -- # set +x 00:31:51.196 nvme0n1 00:31:51.196 21:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:51.196 21:34:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:51.196 21:34:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.196 21:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:51.196 21:34:40 -- common/autotest_common.sh@10 -- # set +x 00:31:51.196 21:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:51.196 21:34:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.196 21:34:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.196 21:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:51.196 21:34:40 -- common/autotest_common.sh@10 -- # set +x 00:31:51.196 21:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:51.196 21:34:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:51.196 21:34:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:51.196 21:34:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:51.196 21:34:40 -- host/auth.sh@44 -- 
# digest=sha384 00:31:51.196 21:34:40 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:51.196 21:34:40 -- host/auth.sh@44 -- # keyid=4 00:31:51.196 21:34:40 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:51.196 21:34:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:51.196 21:34:40 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:51.196 21:34:40 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:51.196 21:34:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:31:51.196 21:34:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:51.196 21:34:40 -- host/auth.sh@68 -- # digest=sha384 00:31:51.196 21:34:40 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:51.196 21:34:40 -- host/auth.sh@68 -- # keyid=4 00:31:51.196 21:34:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:51.196 21:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:51.196 21:34:40 -- common/autotest_common.sh@10 -- # set +x 00:31:51.196 21:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:51.196 21:34:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:51.196 21:34:40 -- nvmf/common.sh@717 -- # local ip 00:31:51.196 21:34:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:51.196 21:34:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:51.196 21:34:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.196 21:34:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.196 21:34:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:51.196 21:34:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.196 21:34:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:51.196 21:34:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:51.196 21:34:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:51.196 21:34:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:51.196 21:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:51.196 21:34:40 -- common/autotest_common.sh@10 -- # set +x 00:31:51.762 nvme0n1 00:31:51.762 21:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:51.762 21:34:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.762 21:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:51.762 21:34:40 -- common/autotest_common.sh@10 -- # set +x 00:31:51.762 21:34:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:51.762 21:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:51.762 21:34:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.762 21:34:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.762 21:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:51.762 21:34:40 -- common/autotest_common.sh@10 -- # set +x 00:31:51.762 21:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:51.762 21:34:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:51.762 21:34:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:51.762 21:34:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:51.762 21:34:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:51.762 21:34:40 -- host/auth.sh@44 -- # 
digest=sha384 00:31:51.762 21:34:40 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:51.762 21:34:40 -- host/auth.sh@44 -- # keyid=0 00:31:51.762 21:34:40 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:51.762 21:34:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:51.762 21:34:40 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:51.762 21:34:40 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:51.762 21:34:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:31:51.762 21:34:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:51.762 21:34:40 -- host/auth.sh@68 -- # digest=sha384 00:31:51.762 21:34:40 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:51.762 21:34:40 -- host/auth.sh@68 -- # keyid=0 00:31:51.762 21:34:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:51.762 21:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:51.762 21:34:40 -- common/autotest_common.sh@10 -- # set +x 00:31:51.762 21:34:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:51.762 21:34:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:51.762 21:34:40 -- nvmf/common.sh@717 -- # local ip 00:31:51.762 21:34:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:51.762 21:34:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:51.762 21:34:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.762 21:34:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.762 21:34:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:51.762 21:34:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.762 21:34:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:51.762 21:34:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:51.762 21:34:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:51.762 21:34:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:51.762 21:34:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:51.762 21:34:40 -- common/autotest_common.sh@10 -- # set +x 00:31:52.329 nvme0n1 00:31:52.329 21:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:52.329 21:34:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.329 21:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:52.329 21:34:41 -- common/autotest_common.sh@10 -- # set +x 00:31:52.329 21:34:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:52.329 21:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:52.329 21:34:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.329 21:34:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.329 21:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:52.329 21:34:41 -- common/autotest_common.sh@10 -- # set +x 00:31:52.330 21:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:52.330 21:34:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:52.330 21:34:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:52.330 21:34:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:52.330 21:34:41 -- host/auth.sh@44 -- # digest=sha384 00:31:52.330 21:34:41 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:52.330 21:34:41 -- host/auth.sh@44 -- # keyid=1 00:31:52.330 21:34:41 -- 
host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:52.330 21:34:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:52.330 21:34:41 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:52.330 21:34:41 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:52.330 21:34:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:31:52.330 21:34:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:52.330 21:34:41 -- host/auth.sh@68 -- # digest=sha384 00:31:52.330 21:34:41 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:52.330 21:34:41 -- host/auth.sh@68 -- # keyid=1 00:31:52.330 21:34:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:52.330 21:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:52.330 21:34:41 -- common/autotest_common.sh@10 -- # set +x 00:31:52.330 21:34:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:52.330 21:34:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:52.330 21:34:41 -- nvmf/common.sh@717 -- # local ip 00:31:52.330 21:34:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:52.330 21:34:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:52.330 21:34:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.330 21:34:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.330 21:34:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:52.330 21:34:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.330 21:34:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:52.330 21:34:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:52.330 21:34:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:52.330 21:34:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:52.330 21:34:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:52.330 21:34:41 -- common/autotest_common.sh@10 -- # set +x 00:31:52.898 nvme0n1 00:31:52.898 21:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:52.898 21:34:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.898 21:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:52.898 21:34:42 -- common/autotest_common.sh@10 -- # set +x 00:31:52.898 21:34:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:52.898 21:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:53.156 21:34:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.156 21:34:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.156 21:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:53.156 21:34:42 -- common/autotest_common.sh@10 -- # set +x 00:31:53.156 21:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:53.156 21:34:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:53.156 21:34:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:53.156 21:34:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:53.156 21:34:42 -- host/auth.sh@44 -- # digest=sha384 00:31:53.156 21:34:42 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:53.156 21:34:42 -- host/auth.sh@44 -- # keyid=2 00:31:53.156 21:34:42 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:53.156 21:34:42 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:53.156 21:34:42 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:53.156 21:34:42 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:53.156 21:34:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:31:53.156 21:34:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:53.156 21:34:42 -- host/auth.sh@68 -- # digest=sha384 00:31:53.156 21:34:42 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:53.156 21:34:42 -- host/auth.sh@68 -- # keyid=2 00:31:53.156 21:34:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:53.156 21:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:53.156 21:34:42 -- common/autotest_common.sh@10 -- # set +x 00:31:53.156 21:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:53.156 21:34:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:53.156 21:34:42 -- nvmf/common.sh@717 -- # local ip 00:31:53.156 21:34:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:53.156 21:34:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:53.156 21:34:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.156 21:34:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.156 21:34:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:53.156 21:34:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.156 21:34:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:53.156 21:34:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:53.156 21:34:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:53.157 21:34:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:53.157 21:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:53.157 21:34:42 -- common/autotest_common.sh@10 -- # set +x 00:31:53.726 nvme0n1 00:31:53.726 21:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:53.726 21:34:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.726 21:34:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:53.726 21:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:53.726 21:34:42 -- common/autotest_common.sh@10 -- # set +x 00:31:53.726 21:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:53.726 21:34:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.726 21:34:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.726 21:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:53.726 21:34:42 -- common/autotest_common.sh@10 -- # set +x 00:31:53.726 21:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:53.726 21:34:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:53.726 21:34:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:53.726 21:34:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:53.726 21:34:42 -- host/auth.sh@44 -- # digest=sha384 00:31:53.726 21:34:42 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:53.726 21:34:42 -- host/auth.sh@44 -- # keyid=3 00:31:53.726 21:34:42 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:53.726 21:34:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:53.726 21:34:42 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:53.726 21:34:42 -- host/auth.sh@49 
-- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:53.726 21:34:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:31:53.726 21:34:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:53.726 21:34:42 -- host/auth.sh@68 -- # digest=sha384 00:31:53.726 21:34:42 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:53.726 21:34:42 -- host/auth.sh@68 -- # keyid=3 00:31:53.726 21:34:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:53.726 21:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:53.726 21:34:42 -- common/autotest_common.sh@10 -- # set +x 00:31:53.726 21:34:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:53.726 21:34:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:53.726 21:34:42 -- nvmf/common.sh@717 -- # local ip 00:31:53.726 21:34:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:53.726 21:34:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:53.726 21:34:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.726 21:34:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.726 21:34:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:53.726 21:34:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.726 21:34:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:53.726 21:34:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:53.726 21:34:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:53.726 21:34:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:53.726 21:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:53.726 21:34:42 -- common/autotest_common.sh@10 -- # set +x 00:31:54.295 nvme0n1 00:31:54.295 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.295 21:34:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.295 21:34:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:54.295 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.295 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:31:54.295 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.295 21:34:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.295 21:34:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.295 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.295 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:31:54.295 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.295 21:34:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:54.295 21:34:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:54.295 21:34:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:54.295 21:34:43 -- host/auth.sh@44 -- # digest=sha384 00:31:54.295 21:34:43 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:54.295 21:34:43 -- host/auth.sh@44 -- # keyid=4 00:31:54.295 21:34:43 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:54.295 21:34:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:31:54.295 21:34:43 -- host/auth.sh@48 -- # echo ffdhe8192 00:31:54.295 21:34:43 -- host/auth.sh@49 -- # echo 
DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:54.295 21:34:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:31:54.295 21:34:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:54.295 21:34:43 -- host/auth.sh@68 -- # digest=sha384 00:31:54.295 21:34:43 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:31:54.295 21:34:43 -- host/auth.sh@68 -- # keyid=4 00:31:54.295 21:34:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:54.295 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.295 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:31:54.554 21:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:54.554 21:34:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:54.554 21:34:43 -- nvmf/common.sh@717 -- # local ip 00:31:54.554 21:34:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:54.554 21:34:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:54.554 21:34:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.554 21:34:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.554 21:34:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:54.554 21:34:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.554 21:34:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:54.554 21:34:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:54.554 21:34:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:54.554 21:34:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:54.554 21:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:54.554 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:31:55.122 nvme0n1 00:31:55.122 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.122 21:34:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.122 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.122 21:34:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:55.122 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.122 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.122 21:34:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.122 21:34:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.122 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.122 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.122 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.122 21:34:44 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:31:55.122 21:34:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:55.122 21:34:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:55.122 21:34:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:55.122 21:34:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:55.122 21:34:44 -- host/auth.sh@44 -- # digest=sha512 00:31:55.122 21:34:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.122 21:34:44 -- host/auth.sh@44 -- # keyid=0 00:31:55.122 21:34:44 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:55.122 21:34:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:55.122 21:34:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:55.122 
21:34:44 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:55.122 21:34:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:31:55.122 21:34:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:55.122 21:34:44 -- host/auth.sh@68 -- # digest=sha512 00:31:55.122 21:34:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:55.122 21:34:44 -- host/auth.sh@68 -- # keyid=0 00:31:55.122 21:34:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:55.122 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.122 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.122 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.122 21:34:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:55.122 21:34:44 -- nvmf/common.sh@717 -- # local ip 00:31:55.122 21:34:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:55.122 21:34:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:55.122 21:34:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.122 21:34:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.122 21:34:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:55.122 21:34:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.122 21:34:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:55.122 21:34:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:55.122 21:34:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:55.122 21:34:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:55.122 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.122 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.122 nvme0n1 00:31:55.122 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.122 21:34:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.122 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.122 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.122 21:34:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:55.123 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.123 21:34:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.123 21:34:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.123 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.123 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.123 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.123 21:34:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:55.123 21:34:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:55.123 21:34:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:55.123 21:34:44 -- host/auth.sh@44 -- # digest=sha512 00:31:55.123 21:34:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.123 21:34:44 -- host/auth.sh@44 -- # keyid=1 00:31:55.123 21:34:44 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:55.382 21:34:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:55.382 21:34:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:55.382 21:34:44 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:55.382 21:34:44 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:31:55.382 21:34:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:55.382 21:34:44 -- host/auth.sh@68 -- # digest=sha512 00:31:55.382 21:34:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:55.382 21:34:44 -- host/auth.sh@68 -- # keyid=1 00:31:55.382 21:34:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:55.382 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.382 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.382 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.382 21:34:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:55.382 21:34:44 -- nvmf/common.sh@717 -- # local ip 00:31:55.382 21:34:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:55.382 21:34:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:55.382 21:34:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.382 21:34:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.382 21:34:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:55.382 21:34:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.382 21:34:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:55.382 21:34:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:55.382 21:34:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:55.382 21:34:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:55.382 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.382 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.382 nvme0n1 00:31:55.382 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.382 21:34:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.382 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.382 21:34:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:55.382 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.382 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.382 21:34:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.382 21:34:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.382 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.382 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.382 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.382 21:34:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:55.382 21:34:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:55.382 21:34:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:55.382 21:34:44 -- host/auth.sh@44 -- # digest=sha512 00:31:55.382 21:34:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.382 21:34:44 -- host/auth.sh@44 -- # keyid=2 00:31:55.382 21:34:44 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:55.382 21:34:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:55.382 21:34:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:55.382 21:34:44 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:55.382 21:34:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:31:55.382 21:34:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:55.382 21:34:44 -- 
host/auth.sh@68 -- # digest=sha512 00:31:55.382 21:34:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:55.382 21:34:44 -- host/auth.sh@68 -- # keyid=2 00:31:55.382 21:34:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:55.382 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.382 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.382 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.382 21:34:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:55.382 21:34:44 -- nvmf/common.sh@717 -- # local ip 00:31:55.382 21:34:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:55.382 21:34:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:55.382 21:34:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.382 21:34:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.382 21:34:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:55.382 21:34:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.382 21:34:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:55.382 21:34:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:55.382 21:34:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:55.383 21:34:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:55.383 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.383 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.643 nvme0n1 00:31:55.643 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.643 21:34:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.643 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.643 21:34:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:55.643 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.643 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.643 21:34:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.643 21:34:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.643 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.643 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.643 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.643 21:34:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:55.643 21:34:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:55.643 21:34:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:55.643 21:34:44 -- host/auth.sh@44 -- # digest=sha512 00:31:55.643 21:34:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.643 21:34:44 -- host/auth.sh@44 -- # keyid=3 00:31:55.643 21:34:44 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:55.643 21:34:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:55.643 21:34:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:55.643 21:34:44 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:55.643 21:34:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:31:55.643 21:34:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:55.643 21:34:44 -- host/auth.sh@68 -- # digest=sha512 00:31:55.643 21:34:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:55.643 21:34:44 
-- host/auth.sh@68 -- # keyid=3 00:31:55.643 21:34:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:55.643 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.643 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.643 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.643 21:34:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:55.643 21:34:44 -- nvmf/common.sh@717 -- # local ip 00:31:55.643 21:34:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:55.643 21:34:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:55.643 21:34:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.643 21:34:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.643 21:34:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:55.643 21:34:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.643 21:34:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:55.643 21:34:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:55.643 21:34:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:55.643 21:34:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:55.643 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.643 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.643 nvme0n1 00:31:55.643 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.643 21:34:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.643 21:34:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:55.643 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.643 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.643 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.904 21:34:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.904 21:34:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.904 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.904 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.904 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.904 21:34:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:55.904 21:34:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:55.904 21:34:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:55.904 21:34:44 -- host/auth.sh@44 -- # digest=sha512 00:31:55.904 21:34:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.904 21:34:44 -- host/auth.sh@44 -- # keyid=4 00:31:55.904 21:34:44 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:55.904 21:34:44 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:55.904 21:34:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:31:55.904 21:34:44 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:55.904 21:34:44 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:31:55.904 21:34:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:55.904 21:34:44 -- host/auth.sh@68 -- # digest=sha512 00:31:55.904 21:34:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:31:55.904 21:34:44 -- host/auth.sh@68 -- # keyid=4 00:31:55.904 21:34:44 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:55.904 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.904 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.904 21:34:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.904 21:34:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:55.904 21:34:44 -- nvmf/common.sh@717 -- # local ip 00:31:55.904 21:34:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:55.904 21:34:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:55.904 21:34:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.904 21:34:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.904 21:34:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:55.904 21:34:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.904 21:34:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:55.904 21:34:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:55.904 21:34:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:55.904 21:34:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:55.904 21:34:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.904 21:34:44 -- common/autotest_common.sh@10 -- # set +x 00:31:55.904 nvme0n1 00:31:55.904 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.904 21:34:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:55.904 21:34:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.904 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.904 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:55.904 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.904 21:34:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.904 21:34:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.904 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.904 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:55.904 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.904 21:34:45 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:55.904 21:34:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:55.904 21:34:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:55.904 21:34:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:55.904 21:34:45 -- host/auth.sh@44 -- # digest=sha512 00:31:55.904 21:34:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:55.904 21:34:45 -- host/auth.sh@44 -- # keyid=0 00:31:55.904 21:34:45 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:55.904 21:34:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:55.904 21:34:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:55.904 21:34:45 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:55.904 21:34:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:31:55.904 21:34:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:55.904 21:34:45 -- host/auth.sh@68 -- # digest=sha512 00:31:55.904 21:34:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:55.904 21:34:45 -- host/auth.sh@68 -- # keyid=0 00:31:55.904 21:34:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
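[editor's note] The trace above repeats one cycle per (digest, dhgroup, keyid) combination: host/auth.sh@69 restricts the host's DH-HMAC-CHAP options, @70 attaches a controller with the matching key, @73 asserts the controller name, and @74 detaches it. The sketch below is a hedged reconstruction of that connect_authenticate cycle for readers following the log; the RPC method names and flags are copied verbatim from the trace, while calling scripts/rpc.py directly (instead of the test's rpc_cmd wrapper), the variable defaults, and the assumption that key0..key4 were registered earlier in the run are illustrative, not the test's actual code.

#!/usr/bin/env bash
# Hedged sketch of the connect_authenticate cycle traced above (host/auth.sh@66-74).
# RPC names/flags match the log; the rpc.py invocation path and defaults are assumptions.
set -euo pipefail

RPC=${RPC:-scripts/rpc.py}            # assumed path to the SPDK RPC client
TARGET_IP=${TARGET_IP:-10.0.0.1}      # address resolved by get_main_ns_ip in this log
HOSTNQN="nqn.2024-02.io.spdk:host0"
SUBNQN="nqn.2024-02.io.spdk:cnode0"

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Restrict the host to the digest/DH-group pair under test.
    "$RPC" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the DH-HMAC-CHAP key whose index matches the target-side key
    # (key0..key4 are assumed to have been set up earlier in the run).
    "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$TARGET_IP" -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key${keyid}"

    # Assert the authenticated controller showed up, then tear it down again.
    [[ $("$RPC" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$RPC" bdev_nvme_detach_controller nvme0
}

# Example: the sha512/ffdhe3072 pass that begins at this point in the log.
connect_authenticate sha512 ffdhe3072 0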
00:31:55.904 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.904 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:55.904 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:55.904 21:34:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:55.904 21:34:45 -- nvmf/common.sh@717 -- # local ip 00:31:55.904 21:34:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:55.904 21:34:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:55.904 21:34:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.904 21:34:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.904 21:34:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:55.904 21:34:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.904 21:34:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:55.904 21:34:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:55.904 21:34:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:55.904 21:34:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:55.904 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:55.904 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.165 nvme0n1 00:31:56.165 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.165 21:34:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.165 21:34:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:56.165 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.165 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.165 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.165 21:34:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.165 21:34:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.165 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.165 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.165 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.165 21:34:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:56.165 21:34:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:56.165 21:34:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:56.165 21:34:45 -- host/auth.sh@44 -- # digest=sha512 00:31:56.165 21:34:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:56.165 21:34:45 -- host/auth.sh@44 -- # keyid=1 00:31:56.165 21:34:45 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:56.165 21:34:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:56.165 21:34:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:56.165 21:34:45 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:56.165 21:34:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:31:56.165 21:34:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:56.165 21:34:45 -- host/auth.sh@68 -- # digest=sha512 00:31:56.165 21:34:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:56.165 21:34:45 -- host/auth.sh@68 -- # keyid=1 00:31:56.165 21:34:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:56.165 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.165 21:34:45 -- 
common/autotest_common.sh@10 -- # set +x 00:31:56.165 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.165 21:34:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:56.165 21:34:45 -- nvmf/common.sh@717 -- # local ip 00:31:56.165 21:34:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:56.165 21:34:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:56.165 21:34:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.165 21:34:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.165 21:34:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:56.165 21:34:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.165 21:34:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:56.165 21:34:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:56.165 21:34:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:56.165 21:34:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:56.165 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.165 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.165 nvme0n1 00:31:56.165 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.165 21:34:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:56.165 21:34:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.165 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.165 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.166 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.425 21:34:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.425 21:34:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.425 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.425 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.425 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.425 21:34:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:56.425 21:34:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:56.425 21:34:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:56.425 21:34:45 -- host/auth.sh@44 -- # digest=sha512 00:31:56.425 21:34:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:56.425 21:34:45 -- host/auth.sh@44 -- # keyid=2 00:31:56.425 21:34:45 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:56.425 21:34:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:56.425 21:34:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:56.425 21:34:45 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:56.425 21:34:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:31:56.425 21:34:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:56.425 21:34:45 -- host/auth.sh@68 -- # digest=sha512 00:31:56.425 21:34:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:56.425 21:34:45 -- host/auth.sh@68 -- # keyid=2 00:31:56.425 21:34:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:56.425 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.425 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.425 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.425 21:34:45 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:31:56.425 21:34:45 -- nvmf/common.sh@717 -- # local ip 00:31:56.425 21:34:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:56.425 21:34:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:56.425 21:34:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.425 21:34:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.425 21:34:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:56.425 21:34:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.425 21:34:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:56.425 21:34:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:56.425 21:34:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:56.425 21:34:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:56.425 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.425 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.425 nvme0n1 00:31:56.425 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.425 21:34:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.425 21:34:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:56.425 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.425 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.425 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.425 21:34:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.425 21:34:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.425 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.425 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.425 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.425 21:34:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:56.425 21:34:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:56.425 21:34:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:56.425 21:34:45 -- host/auth.sh@44 -- # digest=sha512 00:31:56.425 21:34:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:56.425 21:34:45 -- host/auth.sh@44 -- # keyid=3 00:31:56.425 21:34:45 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:56.425 21:34:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:56.425 21:34:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:56.425 21:34:45 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:56.425 21:34:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:31:56.425 21:34:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:56.425 21:34:45 -- host/auth.sh@68 -- # digest=sha512 00:31:56.425 21:34:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:56.425 21:34:45 -- host/auth.sh@68 -- # keyid=3 00:31:56.425 21:34:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:56.425 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.425 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.425 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.425 21:34:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:56.425 21:34:45 -- nvmf/common.sh@717 -- # local ip 00:31:56.425 21:34:45 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:31:56.685 21:34:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:56.685 21:34:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.685 21:34:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.685 21:34:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:56.685 21:34:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.685 21:34:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:56.685 21:34:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:56.685 21:34:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:56.685 21:34:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:56.685 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.685 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.685 nvme0n1 00:31:56.685 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.685 21:34:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:56.685 21:34:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.685 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.685 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.685 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.685 21:34:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.685 21:34:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.685 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.685 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.685 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.685 21:34:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:56.685 21:34:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:56.685 21:34:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:56.685 21:34:45 -- host/auth.sh@44 -- # digest=sha512 00:31:56.685 21:34:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:56.685 21:34:45 -- host/auth.sh@44 -- # keyid=4 00:31:56.685 21:34:45 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:56.685 21:34:45 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:56.685 21:34:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:31:56.685 21:34:45 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:56.685 21:34:45 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:31:56.685 21:34:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:56.685 21:34:45 -- host/auth.sh@68 -- # digest=sha512 00:31:56.685 21:34:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:31:56.685 21:34:45 -- host/auth.sh@68 -- # keyid=4 00:31:56.685 21:34:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:56.685 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.685 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.685 21:34:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.685 21:34:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:56.685 21:34:45 -- nvmf/common.sh@717 -- # local ip 00:31:56.685 21:34:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:56.685 21:34:45 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:31:56.685 21:34:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.685 21:34:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.685 21:34:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:56.685 21:34:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.685 21:34:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:56.685 21:34:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:56.685 21:34:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:56.685 21:34:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:56.685 21:34:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.685 21:34:45 -- common/autotest_common.sh@10 -- # set +x 00:31:56.944 nvme0n1 00:31:56.944 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.944 21:34:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.944 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.944 21:34:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:56.944 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:56.944 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.944 21:34:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.944 21:34:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.944 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.944 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:56.944 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.944 21:34:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:56.944 21:34:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:56.944 21:34:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:56.944 21:34:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:56.944 21:34:46 -- host/auth.sh@44 -- # digest=sha512 00:31:56.944 21:34:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:56.944 21:34:46 -- host/auth.sh@44 -- # keyid=0 00:31:56.944 21:34:46 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:56.944 21:34:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:56.944 21:34:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:56.944 21:34:46 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:56.944 21:34:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:31:56.944 21:34:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:56.944 21:34:46 -- host/auth.sh@68 -- # digest=sha512 00:31:56.944 21:34:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:56.944 21:34:46 -- host/auth.sh@68 -- # keyid=0 00:31:56.944 21:34:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:56.944 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.944 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:56.945 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.945 21:34:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:56.945 21:34:46 -- nvmf/common.sh@717 -- # local ip 00:31:56.945 21:34:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:56.945 21:34:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:56.945 21:34:46 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.945 21:34:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.945 21:34:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:56.945 21:34:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.945 21:34:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:56.945 21:34:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:56.945 21:34:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:56.945 21:34:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:56.945 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.945 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.204 nvme0n1 00:31:57.204 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.204 21:34:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.204 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.204 21:34:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:57.204 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.204 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.204 21:34:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.204 21:34:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.204 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.204 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.204 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.204 21:34:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:57.204 21:34:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:57.204 21:34:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:57.204 21:34:46 -- host/auth.sh@44 -- # digest=sha512 00:31:57.204 21:34:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:57.204 21:34:46 -- host/auth.sh@44 -- # keyid=1 00:31:57.204 21:34:46 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:57.204 21:34:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:57.204 21:34:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:57.204 21:34:46 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:57.204 21:34:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:31:57.204 21:34:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:57.204 21:34:46 -- host/auth.sh@68 -- # digest=sha512 00:31:57.204 21:34:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:57.204 21:34:46 -- host/auth.sh@68 -- # keyid=1 00:31:57.204 21:34:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:57.204 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.204 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.204 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.204 21:34:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:57.204 21:34:46 -- nvmf/common.sh@717 -- # local ip 00:31:57.204 21:34:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:57.204 21:34:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:57.204 21:34:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.204 21:34:46 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.204 21:34:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:57.204 21:34:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.204 21:34:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:57.204 21:34:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:57.204 21:34:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:57.204 21:34:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:57.204 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.204 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.462 nvme0n1 00:31:57.462 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.462 21:34:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.463 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.463 21:34:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:57.463 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.463 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.463 21:34:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.463 21:34:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.463 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.463 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.463 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.463 21:34:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:57.463 21:34:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:57.463 21:34:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:57.463 21:34:46 -- host/auth.sh@44 -- # digest=sha512 00:31:57.463 21:34:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:57.463 21:34:46 -- host/auth.sh@44 -- # keyid=2 00:31:57.463 21:34:46 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:57.463 21:34:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:57.463 21:34:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:57.463 21:34:46 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:57.463 21:34:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:31:57.463 21:34:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:57.463 21:34:46 -- host/auth.sh@68 -- # digest=sha512 00:31:57.463 21:34:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:57.463 21:34:46 -- host/auth.sh@68 -- # keyid=2 00:31:57.463 21:34:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:57.463 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.463 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.463 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.463 21:34:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:57.463 21:34:46 -- nvmf/common.sh@717 -- # local ip 00:31:57.463 21:34:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:57.463 21:34:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:57.463 21:34:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.463 21:34:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.463 21:34:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:57.463 21:34:46 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:31:57.463 21:34:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:57.463 21:34:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:57.463 21:34:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:57.463 21:34:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:57.463 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.463 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.721 nvme0n1 00:31:57.721 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.721 21:34:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.721 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.721 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.721 21:34:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:57.721 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.721 21:34:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.721 21:34:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.721 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.721 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.721 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.721 21:34:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:57.721 21:34:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:57.721 21:34:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:57.721 21:34:46 -- host/auth.sh@44 -- # digest=sha512 00:31:57.721 21:34:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:57.721 21:34:46 -- host/auth.sh@44 -- # keyid=3 00:31:57.721 21:34:46 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:57.721 21:34:46 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:57.721 21:34:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:57.721 21:34:46 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:57.721 21:34:46 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:31:57.721 21:34:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:57.721 21:34:46 -- host/auth.sh@68 -- # digest=sha512 00:31:57.721 21:34:46 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:57.721 21:34:46 -- host/auth.sh@68 -- # keyid=3 00:31:57.721 21:34:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:57.721 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.721 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.721 21:34:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.721 21:34:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:57.721 21:34:46 -- nvmf/common.sh@717 -- # local ip 00:31:57.721 21:34:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:57.721 21:34:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:57.721 21:34:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.721 21:34:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.721 21:34:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:57.721 21:34:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.721 21:34:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:57.721 21:34:46 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:57.721 21:34:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:57.721 21:34:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:57.721 21:34:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.721 21:34:46 -- common/autotest_common.sh@10 -- # set +x 00:31:57.986 nvme0n1 00:31:57.986 21:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.986 21:34:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:57.986 21:34:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.986 21:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.986 21:34:47 -- common/autotest_common.sh@10 -- # set +x 00:31:57.986 21:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.986 21:34:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.986 21:34:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.986 21:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.986 21:34:47 -- common/autotest_common.sh@10 -- # set +x 00:31:57.986 21:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.986 21:34:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:57.986 21:34:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:57.986 21:34:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:57.986 21:34:47 -- host/auth.sh@44 -- # digest=sha512 00:31:57.986 21:34:47 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:57.986 21:34:47 -- host/auth.sh@44 -- # keyid=4 00:31:57.986 21:34:47 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:57.986 21:34:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:57.986 21:34:47 -- host/auth.sh@48 -- # echo ffdhe4096 00:31:57.986 21:34:47 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:31:57.986 21:34:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:31:57.986 21:34:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:57.986 21:34:47 -- host/auth.sh@68 -- # digest=sha512 00:31:57.986 21:34:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:31:57.986 21:34:47 -- host/auth.sh@68 -- # keyid=4 00:31:57.986 21:34:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:57.986 21:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.986 21:34:47 -- common/autotest_common.sh@10 -- # set +x 00:31:57.986 21:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:57.986 21:34:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:57.986 21:34:47 -- nvmf/common.sh@717 -- # local ip 00:31:57.986 21:34:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:57.986 21:34:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:57.986 21:34:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.986 21:34:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.986 21:34:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:57.986 21:34:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.986 21:34:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:57.986 21:34:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:57.986 21:34:47 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:57.986 21:34:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:57.986 21:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:57.986 21:34:47 -- common/autotest_common.sh@10 -- # set +x 00:31:58.255 nvme0n1 00:31:58.255 21:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.255 21:34:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.255 21:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.255 21:34:47 -- common/autotest_common.sh@10 -- # set +x 00:31:58.255 21:34:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:58.255 21:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.255 21:34:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.255 21:34:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.255 21:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.255 21:34:47 -- common/autotest_common.sh@10 -- # set +x 00:31:58.255 21:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.255 21:34:47 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:31:58.255 21:34:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:58.255 21:34:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:58.255 21:34:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:58.255 21:34:47 -- host/auth.sh@44 -- # digest=sha512 00:31:58.255 21:34:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:58.255 21:34:47 -- host/auth.sh@44 -- # keyid=0 00:31:58.255 21:34:47 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:58.255 21:34:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:58.255 21:34:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:58.255 21:34:47 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:31:58.255 21:34:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:31:58.255 21:34:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:58.255 21:34:47 -- host/auth.sh@68 -- # digest=sha512 00:31:58.255 21:34:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:58.255 21:34:47 -- host/auth.sh@68 -- # keyid=0 00:31:58.255 21:34:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:58.255 21:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.255 21:34:47 -- common/autotest_common.sh@10 -- # set +x 00:31:58.255 21:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.255 21:34:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:58.255 21:34:47 -- nvmf/common.sh@717 -- # local ip 00:31:58.255 21:34:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:58.255 21:34:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:58.255 21:34:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.255 21:34:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.255 21:34:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:58.255 21:34:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.255 21:34:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:58.255 21:34:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:58.255 21:34:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:58.255 21:34:47 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:31:58.255 21:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.255 21:34:47 -- common/autotest_common.sh@10 -- # set +x 00:31:58.514 nvme0n1 00:31:58.514 21:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.514 21:34:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.514 21:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.514 21:34:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:58.514 21:34:47 -- common/autotest_common.sh@10 -- # set +x 00:31:58.514 21:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.773 21:34:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.773 21:34:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.773 21:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.773 21:34:47 -- common/autotest_common.sh@10 -- # set +x 00:31:58.774 21:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.774 21:34:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:58.774 21:34:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:58.774 21:34:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:58.774 21:34:47 -- host/auth.sh@44 -- # digest=sha512 00:31:58.774 21:34:47 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:58.774 21:34:47 -- host/auth.sh@44 -- # keyid=1 00:31:58.774 21:34:47 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:58.774 21:34:47 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:58.774 21:34:47 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:58.774 21:34:47 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:31:58.774 21:34:47 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:31:58.774 21:34:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:58.774 21:34:47 -- host/auth.sh@68 -- # digest=sha512 00:31:58.774 21:34:47 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:58.774 21:34:47 -- host/auth.sh@68 -- # keyid=1 00:31:58.774 21:34:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:58.774 21:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.774 21:34:47 -- common/autotest_common.sh@10 -- # set +x 00:31:58.774 21:34:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.774 21:34:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:58.774 21:34:47 -- nvmf/common.sh@717 -- # local ip 00:31:58.774 21:34:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:58.774 21:34:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:58.774 21:34:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.774 21:34:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.774 21:34:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:58.774 21:34:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.774 21:34:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:58.774 21:34:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:58.774 21:34:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:58.774 21:34:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:31:58.774 21:34:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.774 21:34:47 -- common/autotest_common.sh@10 -- # set +x 00:31:59.033 nvme0n1 00:31:59.033 21:34:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:59.033 21:34:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.033 21:34:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:59.033 21:34:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:59.033 21:34:48 -- common/autotest_common.sh@10 -- # set +x 00:31:59.033 21:34:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:59.033 21:34:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.033 21:34:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.033 21:34:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:59.033 21:34:48 -- common/autotest_common.sh@10 -- # set +x 00:31:59.033 21:34:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:59.033 21:34:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:59.033 21:34:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:59.033 21:34:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:59.033 21:34:48 -- host/auth.sh@44 -- # digest=sha512 00:31:59.033 21:34:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:59.033 21:34:48 -- host/auth.sh@44 -- # keyid=2 00:31:59.033 21:34:48 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:59.033 21:34:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:59.033 21:34:48 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:59.033 21:34:48 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:31:59.033 21:34:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:31:59.033 21:34:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:59.033 21:34:48 -- host/auth.sh@68 -- # digest=sha512 00:31:59.033 21:34:48 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:59.033 21:34:48 -- host/auth.sh@68 -- # keyid=2 00:31:59.033 21:34:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:59.033 21:34:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:59.033 21:34:48 -- common/autotest_common.sh@10 -- # set +x 00:31:59.033 21:34:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:59.033 21:34:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:59.033 21:34:48 -- nvmf/common.sh@717 -- # local ip 00:31:59.033 21:34:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:59.033 21:34:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:59.033 21:34:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.033 21:34:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.033 21:34:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:59.033 21:34:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.033 21:34:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:59.033 21:34:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:59.033 21:34:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:59.033 21:34:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:31:59.033 21:34:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:59.033 21:34:48 -- 
common/autotest_common.sh@10 -- # set +x 00:31:59.601 nvme0n1 00:31:59.601 21:34:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:59.601 21:34:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.601 21:34:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:59.601 21:34:48 -- common/autotest_common.sh@10 -- # set +x 00:31:59.601 21:34:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:59.601 21:34:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:59.601 21:34:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.601 21:34:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.601 21:34:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:59.601 21:34:48 -- common/autotest_common.sh@10 -- # set +x 00:31:59.601 21:34:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:59.601 21:34:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:31:59.601 21:34:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:31:59.601 21:34:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:31:59.601 21:34:48 -- host/auth.sh@44 -- # digest=sha512 00:31:59.601 21:34:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:59.601 21:34:48 -- host/auth.sh@44 -- # keyid=3 00:31:59.601 21:34:48 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:59.601 21:34:48 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:31:59.601 21:34:48 -- host/auth.sh@48 -- # echo ffdhe6144 00:31:59.601 21:34:48 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:31:59.601 21:34:48 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:31:59.601 21:34:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:31:59.602 21:34:48 -- host/auth.sh@68 -- # digest=sha512 00:31:59.602 21:34:48 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:31:59.602 21:34:48 -- host/auth.sh@68 -- # keyid=3 00:31:59.602 21:34:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:59.602 21:34:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:59.602 21:34:48 -- common/autotest_common.sh@10 -- # set +x 00:31:59.602 21:34:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:59.602 21:34:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:31:59.602 21:34:48 -- nvmf/common.sh@717 -- # local ip 00:31:59.602 21:34:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:59.602 21:34:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:59.602 21:34:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.602 21:34:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.602 21:34:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:59.602 21:34:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.602 21:34:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:59.602 21:34:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:59.602 21:34:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:31:59.602 21:34:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:31:59.602 21:34:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:59.602 21:34:48 -- common/autotest_common.sh@10 -- # set +x 00:31:59.861 nvme0n1 00:31:59.861 21:34:49 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:31:59.861 21:34:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:31:59.861 21:34:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.861 21:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:59.861 21:34:49 -- common/autotest_common.sh@10 -- # set +x 00:32:00.120 21:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:00.120 21:34:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.120 21:34:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.120 21:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:00.120 21:34:49 -- common/autotest_common.sh@10 -- # set +x 00:32:00.120 21:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:00.120 21:34:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:00.120 21:34:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:00.120 21:34:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:00.120 21:34:49 -- host/auth.sh@44 -- # digest=sha512 00:32:00.120 21:34:49 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:00.120 21:34:49 -- host/auth.sh@44 -- # keyid=4 00:32:00.120 21:34:49 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:32:00.120 21:34:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:00.120 21:34:49 -- host/auth.sh@48 -- # echo ffdhe6144 00:32:00.120 21:34:49 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:32:00.120 21:34:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:32:00.120 21:34:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:00.120 21:34:49 -- host/auth.sh@68 -- # digest=sha512 00:32:00.120 21:34:49 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:32:00.120 21:34:49 -- host/auth.sh@68 -- # keyid=4 00:32:00.120 21:34:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:00.120 21:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:00.120 21:34:49 -- common/autotest_common.sh@10 -- # set +x 00:32:00.120 21:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:00.120 21:34:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:00.120 21:34:49 -- nvmf/common.sh@717 -- # local ip 00:32:00.120 21:34:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:00.120 21:34:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:00.120 21:34:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.120 21:34:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.120 21:34:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:00.120 21:34:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.120 21:34:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:00.120 21:34:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:00.120 21:34:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:00.120 21:34:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:00.120 21:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:00.120 21:34:49 -- common/autotest_common.sh@10 -- # set +x 00:32:00.379 nvme0n1 00:32:00.379 21:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:00.379 21:34:49 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:00.379 21:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:00.379 21:34:49 -- common/autotest_common.sh@10 -- # set +x 00:32:00.379 21:34:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:00.379 21:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:00.379 21:34:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.379 21:34:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.379 21:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:00.379 21:34:49 -- common/autotest_common.sh@10 -- # set +x 00:32:00.379 21:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:00.379 21:34:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:32:00.379 21:34:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:00.379 21:34:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:00.379 21:34:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:00.379 21:34:49 -- host/auth.sh@44 -- # digest=sha512 00:32:00.379 21:34:49 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:00.379 21:34:49 -- host/auth.sh@44 -- # keyid=0 00:32:00.379 21:34:49 -- host/auth.sh@45 -- # key=DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:32:00.379 21:34:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:00.379 21:34:49 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:00.379 21:34:49 -- host/auth.sh@49 -- # echo DHHC-1:00:OGQ1MGVlZjcwOTcxYTI4ZjNlMjZhOWVlNDljYjQ2NTX8OCsn: 00:32:00.379 21:34:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:32:00.379 21:34:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:00.379 21:34:49 -- host/auth.sh@68 -- # digest=sha512 00:32:00.379 21:34:49 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:00.379 21:34:49 -- host/auth.sh@68 -- # keyid=0 00:32:00.379 21:34:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:00.379 21:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:00.379 21:34:49 -- common/autotest_common.sh@10 -- # set +x 00:32:00.379 21:34:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:00.379 21:34:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:00.379 21:34:49 -- nvmf/common.sh@717 -- # local ip 00:32:00.379 21:34:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:00.379 21:34:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:00.379 21:34:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.379 21:34:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.379 21:34:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:00.379 21:34:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.379 21:34:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:00.379 21:34:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:00.379 21:34:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:00.379 21:34:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:32:00.379 21:34:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:00.379 21:34:49 -- common/autotest_common.sh@10 -- # set +x 00:32:01.315 nvme0n1 00:32:01.315 21:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:01.315 21:34:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.315 21:34:50 -- host/auth.sh@73 -- # jq -r '.[].name' 
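Each of the connect_authenticate iterations traced above exercises one digest/dhgroup/key combination from the SPDK host side. A minimal standalone sketch of a single iteration follows, under the assumption that rpc.py talks to the same application socket the rpc_cmd helper uses and that the DH-HMAC-CHAP key object was registered with the application earlier in the run (that step is not shown in this excerpt):

# one host-side DHCHAP iteration (illustrative sketch; the socket path is an assumption)
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
# the attach succeeds only if the DH-HMAC-CHAP handshake with the kernel target completes
$RPC bdev_nvme_get_controllers | jq -r '.[].name'    # expect "nvme0"
$RPC bdev_nvme_detach_controller nvme0               # detach before the next combination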
00:32:01.315 21:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:01.315 21:34:50 -- common/autotest_common.sh@10 -- # set +x 00:32:01.315 21:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:01.315 21:34:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.315 21:34:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.315 21:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:01.315 21:34:50 -- common/autotest_common.sh@10 -- # set +x 00:32:01.315 21:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:01.315 21:34:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:01.315 21:34:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:01.315 21:34:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:01.315 21:34:50 -- host/auth.sh@44 -- # digest=sha512 00:32:01.315 21:34:50 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:01.315 21:34:50 -- host/auth.sh@44 -- # keyid=1 00:32:01.315 21:34:50 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:32:01.315 21:34:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:01.315 21:34:50 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:01.315 21:34:50 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:32:01.315 21:34:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:32:01.315 21:34:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:01.315 21:34:50 -- host/auth.sh@68 -- # digest=sha512 00:32:01.315 21:34:50 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:01.315 21:34:50 -- host/auth.sh@68 -- # keyid=1 00:32:01.315 21:34:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:01.315 21:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:01.315 21:34:50 -- common/autotest_common.sh@10 -- # set +x 00:32:01.315 21:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:01.315 21:34:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:01.315 21:34:50 -- nvmf/common.sh@717 -- # local ip 00:32:01.315 21:34:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:01.315 21:34:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:01.315 21:34:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.315 21:34:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.315 21:34:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:01.315 21:34:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.315 21:34:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:01.315 21:34:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:01.315 21:34:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:01.315 21:34:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:32:01.315 21:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:01.315 21:34:50 -- common/autotest_common.sh@10 -- # set +x 00:32:01.883 nvme0n1 00:32:01.883 21:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:01.883 21:34:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.883 21:34:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:01.883 21:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:01.883 21:34:50 -- 
common/autotest_common.sh@10 -- # set +x 00:32:01.883 21:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:01.883 21:34:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.883 21:34:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.883 21:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:01.883 21:34:50 -- common/autotest_common.sh@10 -- # set +x 00:32:01.883 21:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:01.883 21:34:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:01.883 21:34:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:01.883 21:34:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:01.883 21:34:50 -- host/auth.sh@44 -- # digest=sha512 00:32:01.883 21:34:50 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:01.883 21:34:50 -- host/auth.sh@44 -- # keyid=2 00:32:01.883 21:34:50 -- host/auth.sh@45 -- # key=DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:32:01.883 21:34:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:01.883 21:34:50 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:01.883 21:34:50 -- host/auth.sh@49 -- # echo DHHC-1:01:MWI1NWRjYTEwNGYwMjFiMzM4YjdmYjE2NGU4OWYwZjMMMYg0: 00:32:01.883 21:34:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:32:01.883 21:34:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:01.883 21:34:50 -- host/auth.sh@68 -- # digest=sha512 00:32:01.883 21:34:50 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:01.883 21:34:50 -- host/auth.sh@68 -- # keyid=2 00:32:01.883 21:34:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:01.883 21:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:01.883 21:34:50 -- common/autotest_common.sh@10 -- # set +x 00:32:01.883 21:34:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:01.883 21:34:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:01.883 21:34:50 -- nvmf/common.sh@717 -- # local ip 00:32:01.883 21:34:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:01.883 21:34:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:01.883 21:34:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.883 21:34:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.883 21:34:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:01.883 21:34:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.883 21:34:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:01.883 21:34:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:01.883 21:34:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:01.883 21:34:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:01.883 21:34:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:01.883 21:34:50 -- common/autotest_common.sh@10 -- # set +x 00:32:02.451 nvme0n1 00:32:02.451 21:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:02.451 21:34:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.451 21:34:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:02.451 21:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:02.451 21:34:51 -- common/autotest_common.sh@10 -- # set +x 00:32:02.451 21:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:02.451 21:34:51 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:02.451 21:34:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.451 21:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:02.451 21:34:51 -- common/autotest_common.sh@10 -- # set +x 00:32:02.451 21:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:02.451 21:34:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:02.451 21:34:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:02.451 21:34:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:02.451 21:34:51 -- host/auth.sh@44 -- # digest=sha512 00:32:02.451 21:34:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:02.451 21:34:51 -- host/auth.sh@44 -- # keyid=3 00:32:02.451 21:34:51 -- host/auth.sh@45 -- # key=DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:32:02.451 21:34:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:02.451 21:34:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:02.451 21:34:51 -- host/auth.sh@49 -- # echo DHHC-1:02:YmY2N2JkMmU5MzQyZmI1MzRlNTFlMzMwZmU1ZmYxMTdhN2M4ODUwYTNhZDBiMzU4Xmu9tQ==: 00:32:02.451 21:34:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:32:02.451 21:34:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:02.451 21:34:51 -- host/auth.sh@68 -- # digest=sha512 00:32:02.451 21:34:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:02.451 21:34:51 -- host/auth.sh@68 -- # keyid=3 00:32:02.451 21:34:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:02.451 21:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:02.451 21:34:51 -- common/autotest_common.sh@10 -- # set +x 00:32:02.451 21:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:02.451 21:34:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:02.451 21:34:51 -- nvmf/common.sh@717 -- # local ip 00:32:02.451 21:34:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:02.451 21:34:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:02.451 21:34:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.451 21:34:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.451 21:34:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:02.451 21:34:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.451 21:34:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:02.451 21:34:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:02.451 21:34:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:02.451 21:34:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:32:02.451 21:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:02.451 21:34:51 -- common/autotest_common.sh@10 -- # set +x 00:32:03.018 nvme0n1 00:32:03.018 21:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.018 21:34:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:03.018 21:34:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.018 21:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.018 21:34:52 -- common/autotest_common.sh@10 -- # set +x 00:32:03.018 21:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.276 21:34:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.276 21:34:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.276 
21:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.276 21:34:52 -- common/autotest_common.sh@10 -- # set +x 00:32:03.276 21:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.276 21:34:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:32:03.276 21:34:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:03.276 21:34:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:03.276 21:34:52 -- host/auth.sh@44 -- # digest=sha512 00:32:03.276 21:34:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:03.276 21:34:52 -- host/auth.sh@44 -- # keyid=4 00:32:03.276 21:34:52 -- host/auth.sh@45 -- # key=DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:32:03.276 21:34:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:32:03.276 21:34:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:32:03.276 21:34:52 -- host/auth.sh@49 -- # echo DHHC-1:03:Y2NmOGYwMmJkNzA4ZTJkZDU3NDY2OTI2NDNmZDYxMDFjM2M3ZTE3ZDFlZjJkOTViYWIwNDg3MDc2YzUwY2U0OBjhrpo=: 00:32:03.276 21:34:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:32:03.276 21:34:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:32:03.276 21:34:52 -- host/auth.sh@68 -- # digest=sha512 00:32:03.276 21:34:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:32:03.276 21:34:52 -- host/auth.sh@68 -- # keyid=4 00:32:03.276 21:34:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:03.276 21:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.276 21:34:52 -- common/autotest_common.sh@10 -- # set +x 00:32:03.276 21:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.276 21:34:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:32:03.276 21:34:52 -- nvmf/common.sh@717 -- # local ip 00:32:03.276 21:34:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:03.276 21:34:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:03.276 21:34:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.276 21:34:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.276 21:34:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:03.276 21:34:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.276 21:34:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:03.276 21:34:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:03.276 21:34:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:03.276 21:34:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:03.276 21:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.276 21:34:52 -- common/autotest_common.sh@10 -- # set +x 00:32:03.852 nvme0n1 00:32:03.852 21:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.852 21:34:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.852 21:34:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.852 21:34:52 -- common/autotest_common.sh@10 -- # set +x 00:32:03.852 21:34:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:32:03.852 21:34:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.852 21:34:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.852 21:34:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.852 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.852 
21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:32:03.852 21:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.852 21:34:53 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:03.852 21:34:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:32:03.852 21:34:53 -- host/auth.sh@44 -- # digest=sha256 00:32:03.852 21:34:53 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:03.852 21:34:53 -- host/auth.sh@44 -- # keyid=1 00:32:03.852 21:34:53 -- host/auth.sh@45 -- # key=DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:32:03.852 21:34:53 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:32:03.852 21:34:53 -- host/auth.sh@48 -- # echo ffdhe2048 00:32:03.852 21:34:53 -- host/auth.sh@49 -- # echo DHHC-1:00:MjM1YzgwYTU1ZTlkZjRhODRmZjE1YjBmZWI3YzQyMGFkZjNlZmIyOWQ4MWEyODA3NiT+3g==: 00:32:03.852 21:34:53 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:03.852 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.852 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:32:03.852 21:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:03.852 21:34:53 -- host/auth.sh@119 -- # get_main_ns_ip 00:32:03.852 21:34:53 -- nvmf/common.sh@717 -- # local ip 00:32:03.852 21:34:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:03.852 21:34:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:03.852 21:34:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.852 21:34:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.852 21:34:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:03.852 21:34:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.852 21:34:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:03.852 21:34:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:03.852 21:34:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:03.852 21:34:53 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:03.852 21:34:53 -- common/autotest_common.sh@638 -- # local es=0 00:32:03.852 21:34:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:03.852 21:34:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:32:03.852 21:34:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:03.852 21:34:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:32:03.852 21:34:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:03.852 21:34:53 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:03.852 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.852 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:32:03.852 2024/04/26 21:34:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:32:03.852 request: 00:32:03.852 { 00:32:03.852 "method": 
"bdev_nvme_attach_controller", 00:32:03.852 "params": { 00:32:03.852 "name": "nvme0", 00:32:03.852 "trtype": "tcp", 00:32:03.852 "traddr": "10.0.0.1", 00:32:03.852 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:03.852 "adrfam": "ipv4", 00:32:03.852 "trsvcid": "4420", 00:32:03.852 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:32:03.852 } 00:32:03.852 } 00:32:03.852 Got JSON-RPC error response 00:32:03.852 GoRPCClient: error on JSON-RPC call 00:32:03.852 21:34:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:32:03.852 21:34:53 -- common/autotest_common.sh@641 -- # es=1 00:32:03.852 21:34:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:03.852 21:34:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:03.852 21:34:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:03.852 21:34:53 -- host/auth.sh@121 -- # jq length 00:32:03.852 21:34:53 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.852 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:03.852 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:32:03.852 21:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:04.121 21:34:53 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:32:04.121 21:34:53 -- host/auth.sh@124 -- # get_main_ns_ip 00:32:04.121 21:34:53 -- nvmf/common.sh@717 -- # local ip 00:32:04.121 21:34:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:04.121 21:34:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:04.121 21:34:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.121 21:34:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.121 21:34:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:04.121 21:34:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.121 21:34:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:04.121 21:34:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:04.121 21:34:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:32:04.121 21:34:53 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:04.121 21:34:53 -- common/autotest_common.sh@638 -- # local es=0 00:32:04.121 21:34:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:04.121 21:34:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:32:04.121 21:34:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:04.121 21:34:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:32:04.121 21:34:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:04.121 21:34:53 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:04.121 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:04.121 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:32:04.121 2024/04/26 21:34:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:32:04.121 
request: 00:32:04.121 { 00:32:04.121 "method": "bdev_nvme_attach_controller", 00:32:04.121 "params": { 00:32:04.121 "name": "nvme0", 00:32:04.121 "trtype": "tcp", 00:32:04.121 "traddr": "10.0.0.1", 00:32:04.121 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:04.121 "adrfam": "ipv4", 00:32:04.121 "trsvcid": "4420", 00:32:04.121 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:04.121 "dhchap_key": "key2" 00:32:04.121 } 00:32:04.121 } 00:32:04.121 Got JSON-RPC error response 00:32:04.121 GoRPCClient: error on JSON-RPC call 00:32:04.121 21:34:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:32:04.121 21:34:53 -- common/autotest_common.sh@641 -- # es=1 00:32:04.121 21:34:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:04.121 21:34:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:04.121 21:34:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:04.121 21:34:53 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.121 21:34:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:04.121 21:34:53 -- common/autotest_common.sh@10 -- # set +x 00:32:04.121 21:34:53 -- host/auth.sh@127 -- # jq length 00:32:04.121 21:34:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:04.121 21:34:53 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:32:04.121 21:34:53 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:32:04.121 21:34:53 -- host/auth.sh@130 -- # cleanup 00:32:04.121 21:34:53 -- host/auth.sh@24 -- # nvmftestfini 00:32:04.121 21:34:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:04.121 21:34:53 -- nvmf/common.sh@117 -- # sync 00:32:04.121 21:34:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:04.121 21:34:53 -- nvmf/common.sh@120 -- # set +e 00:32:04.121 21:34:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:04.121 21:34:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:04.121 rmmod nvme_tcp 00:32:04.121 rmmod nvme_fabrics 00:32:04.121 21:34:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:04.121 21:34:53 -- nvmf/common.sh@124 -- # set -e 00:32:04.121 21:34:53 -- nvmf/common.sh@125 -- # return 0 00:32:04.121 21:34:53 -- nvmf/common.sh@478 -- # '[' -n 102740 ']' 00:32:04.121 21:34:53 -- nvmf/common.sh@479 -- # killprocess 102740 00:32:04.121 21:34:53 -- common/autotest_common.sh@936 -- # '[' -z 102740 ']' 00:32:04.121 21:34:53 -- common/autotest_common.sh@940 -- # kill -0 102740 00:32:04.121 21:34:53 -- common/autotest_common.sh@941 -- # uname 00:32:04.121 21:34:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:04.121 21:34:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102740 00:32:04.121 21:34:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:04.121 21:34:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:04.121 killing process with pid 102740 00:32:04.121 21:34:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102740' 00:32:04.121 21:34:53 -- common/autotest_common.sh@955 -- # kill 102740 00:32:04.121 21:34:53 -- common/autotest_common.sh@960 -- # wait 102740 00:32:04.381 21:34:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:32:04.381 21:34:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:04.381 21:34:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:04.381 21:34:53 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:04.381 21:34:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:04.381 21:34:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
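The two attach attempts above are deliberate failure cases: with DH-HMAC-CHAP required by the kernel target, bdev_nvme_attach_controller without a key, and again with the non-matching key2, must return the -32602 Invalid parameters error shown in the JSON-RPC responses. The harness asserts this with its NOT/es wrapper from autotest_common.sh; a reduced sketch of that expect-failure pattern (the helper body here is a simplification, not the real implementation):

# simplified expect-failure wrapper mirroring the NOT/es checks above
not() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # succeed only when the wrapped command failed
}

not rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0    # no key -> must fail
rpc_cmd bdev_nvme_get_controllers | jq length                     # expect 0 controllers left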
00:32:04.381 21:34:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:04.381 21:34:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.381 21:34:53 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:04.381 21:34:53 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:04.381 21:34:53 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:04.381 21:34:53 -- host/auth.sh@27 -- # clean_kernel_target 00:32:04.381 21:34:53 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:04.381 21:34:53 -- nvmf/common.sh@675 -- # echo 0 00:32:04.381 21:34:53 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:04.381 21:34:53 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:04.381 21:34:53 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:04.381 21:34:53 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:04.381 21:34:53 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:32:04.381 21:34:53 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:32:04.381 21:34:53 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:05.317 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:05.317 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:05.317 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:05.317 21:34:54 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tPQ /tmp/spdk.key-null.dFH /tmp/spdk.key-sha256.iG1 /tmp/spdk.key-sha384.53n /tmp/spdk.key-sha512.nvo /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:32:05.317 21:34:54 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:05.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:05.884 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:05.884 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:05.884 00:32:05.884 real 0m37.729s 00:32:05.884 user 0m33.952s 00:32:05.884 sys 0m4.180s 00:32:05.884 21:34:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:05.884 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:32:05.884 ************************************ 00:32:05.884 END TEST nvmf_auth 00:32:05.884 ************************************ 00:32:05.884 21:34:55 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:32:05.884 21:34:55 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:05.884 21:34:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:05.884 21:34:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:05.884 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:32:06.143 ************************************ 00:32:06.143 START TEST nvmf_digest 00:32:06.143 ************************************ 00:32:06.144 21:34:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:06.144 * Looking for test storage... 
00:32:06.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:06.144 21:34:55 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:06.144 21:34:55 -- nvmf/common.sh@7 -- # uname -s 00:32:06.144 21:34:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.144 21:34:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.144 21:34:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.144 21:34:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.144 21:34:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.144 21:34:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.144 21:34:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.144 21:34:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.144 21:34:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.144 21:34:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.144 21:34:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:32:06.144 21:34:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:32:06.144 21:34:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.144 21:34:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.144 21:34:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:06.144 21:34:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.144 21:34:55 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:06.144 21:34:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.144 21:34:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.144 21:34:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.144 21:34:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.144 21:34:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.144 21:34:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.144 21:34:55 -- paths/export.sh@5 -- # export PATH 00:32:06.144 21:34:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.144 21:34:55 -- nvmf/common.sh@47 -- # : 0 00:32:06.144 21:34:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:06.144 21:34:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:06.144 21:34:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.144 21:34:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.144 21:34:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.144 21:34:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:06.144 21:34:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:06.144 21:34:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:06.144 21:34:55 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:06.144 21:34:55 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:06.144 21:34:55 -- host/digest.sh@16 -- # runtime=2 00:32:06.144 21:34:55 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:06.144 21:34:55 -- host/digest.sh@138 -- # nvmftestinit 00:32:06.144 21:34:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:32:06.144 21:34:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.144 21:34:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:32:06.144 21:34:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:32:06.144 21:34:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:32:06.144 21:34:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.144 21:34:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:06.144 21:34:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.144 21:34:55 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:32:06.144 21:34:55 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:32:06.144 21:34:55 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:32:06.144 21:34:55 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:32:06.144 21:34:55 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:32:06.144 21:34:55 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:32:06.144 21:34:55 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.144 21:34:55 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.144 21:34:55 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:06.144 21:34:55 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:06.144 21:34:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:32:06.144 21:34:55 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:06.144 21:34:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:06.144 21:34:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.144 21:34:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:06.144 21:34:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:06.144 21:34:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:06.144 21:34:55 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:06.144 21:34:55 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:06.144 21:34:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:06.144 Cannot find device "nvmf_tgt_br" 00:32:06.144 21:34:55 -- nvmf/common.sh@155 -- # true 00:32:06.144 21:34:55 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:06.144 Cannot find device "nvmf_tgt_br2" 00:32:06.144 21:34:55 -- nvmf/common.sh@156 -- # true 00:32:06.144 21:34:55 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:06.144 21:34:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:06.144 Cannot find device "nvmf_tgt_br" 00:32:06.144 21:34:55 -- nvmf/common.sh@158 -- # true 00:32:06.144 21:34:55 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:06.144 Cannot find device "nvmf_tgt_br2" 00:32:06.144 21:34:55 -- nvmf/common.sh@159 -- # true 00:32:06.144 21:34:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:06.404 21:34:55 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:06.404 21:34:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:06.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:06.404 21:34:55 -- nvmf/common.sh@162 -- # true 00:32:06.404 21:34:55 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:06.404 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:06.404 21:34:55 -- nvmf/common.sh@163 -- # true 00:32:06.404 21:34:55 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:06.404 21:34:55 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:06.404 21:34:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:06.404 21:34:55 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:06.404 21:34:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:06.404 21:34:55 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:06.404 21:34:55 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:06.404 21:34:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:06.404 21:34:55 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:06.404 21:34:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:06.404 21:34:55 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:06.404 21:34:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:06.404 21:34:55 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:06.404 21:34:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:06.404 21:34:55 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:06.404 21:34:55 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:06.404 21:34:55 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:06.404 21:34:55 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:06.404 21:34:55 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:06.404 21:34:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:06.404 21:34:55 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:06.404 21:34:55 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:06.404 21:34:55 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:06.404 21:34:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:06.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:32:06.404 00:32:06.404 --- 10.0.0.2 ping statistics --- 00:32:06.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.404 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:32:06.404 21:34:55 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:06.404 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:06.404 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:32:06.404 00:32:06.404 --- 10.0.0.3 ping statistics --- 00:32:06.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.404 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:32:06.404 21:34:55 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:06.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:06.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:32:06.404 00:32:06.404 --- 10.0.0.1 ping statistics --- 00:32:06.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.404 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:32:06.404 21:34:55 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.404 21:34:55 -- nvmf/common.sh@422 -- # return 0 00:32:06.404 21:34:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:32:06.404 21:34:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.404 21:34:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:32:06.404 21:34:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:32:06.404 21:34:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.404 21:34:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:32:06.404 21:34:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:32:06.404 21:34:55 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:06.404 21:34:55 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:06.404 21:34:55 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:06.404 21:34:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:06.404 21:34:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:06.404 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:32:06.404 ************************************ 00:32:06.404 START TEST nvmf_digest_clean 00:32:06.404 ************************************ 00:32:06.404 21:34:55 -- common/autotest_common.sh@1111 -- # run_digest 00:32:06.404 21:34:55 -- host/digest.sh@120 -- # local dsa_initiator 00:32:06.404 21:34:55 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:06.404 21:34:55 -- host/digest.sh@121 -- # dsa_initiator=false 
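nvmf_veth_init above rebuilds the all-virtual network the digest test runs on: each veth pair keeps one end in the root namespace, the target-side ends are moved into the nvmf_tgt_ns_spdk namespace, and the bridge-side peers are enslaved to nvmf_br, with an iptables rule admitting TCP/4420 on the initiator interface. Condensed to the essential commands from the trace (the second target interface, 10.0.0.3, is set up the same way, and the link-up steps are collapsed):

# resulting topology, 10.0.0.0/24:
#   root netns:        nvmf_init_if 10.0.0.1 <-veth-> nvmf_init_br --\
#                                                                     +-- bridge nvmf_br
#   nvmf_tgt_ns_spdk:  nvmf_tgt_if  10.0.0.2 <-veth-> nvmf_tgt_br  --/
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# after bringing all links up, the ping checks above confirm 10.0.0.1 <-> 10.0.0.2/10.0.0.3 reachability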
00:32:06.404 21:34:55 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:06.404 21:34:55 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:06.404 21:34:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:06.404 21:34:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:06.404 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:32:06.662 21:34:55 -- nvmf/common.sh@470 -- # nvmfpid=104351 00:32:06.663 21:34:55 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:06.663 21:34:55 -- nvmf/common.sh@471 -- # waitforlisten 104351 00:32:06.663 21:34:55 -- common/autotest_common.sh@817 -- # '[' -z 104351 ']' 00:32:06.663 21:34:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.663 21:34:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:06.663 21:34:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.663 21:34:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:06.663 21:34:55 -- common/autotest_common.sh@10 -- # set +x 00:32:06.663 [2024-04-26 21:34:55.716519] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:32:06.663 [2024-04-26 21:34:55.716599] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.663 [2024-04-26 21:34:55.859217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.663 [2024-04-26 21:34:55.912058] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.663 [2024-04-26 21:34:55.912115] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.663 [2024-04-26 21:34:55.912123] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.663 [2024-04-26 21:34:55.912129] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.663 [2024-04-26 21:34:55.912134] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
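nvmfappstart then boots the target inside the namespace and blocks until its RPC socket answers. A minimal sketch of the equivalent manual steps, with the binary path and flags as logged (the polling loop is an illustrative stand-in for the harness's waitforlisten; rpc_get_methods is just a cheap RPC to probe with):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the target is ready to accept configuration
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

--wait-for-rpc keeps framework initialization paused until an explicit framework_start_init, which is what lets the digest tests adjust accel settings before the target finishes coming up.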
00:32:06.663 [2024-04-26 21:34:55.912163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.599 21:34:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:07.599 21:34:56 -- common/autotest_common.sh@850 -- # return 0 00:32:07.599 21:34:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:07.599 21:34:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:07.599 21:34:56 -- common/autotest_common.sh@10 -- # set +x 00:32:07.599 21:34:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.599 21:34:56 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:07.599 21:34:56 -- host/digest.sh@126 -- # common_target_config 00:32:07.599 21:34:56 -- host/digest.sh@43 -- # rpc_cmd 00:32:07.599 21:34:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:07.599 21:34:56 -- common/autotest_common.sh@10 -- # set +x 00:32:07.599 null0 00:32:07.599 [2024-04-26 21:34:56.832670] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:07.859 [2024-04-26 21:34:56.864755] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:07.859 21:34:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:07.859 21:34:56 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:07.859 21:34:56 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:07.859 21:34:56 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:07.859 21:34:56 -- host/digest.sh@80 -- # rw=randread 00:32:07.859 21:34:56 -- host/digest.sh@80 -- # bs=4096 00:32:07.859 21:34:56 -- host/digest.sh@80 -- # qd=128 00:32:07.859 21:34:56 -- host/digest.sh@80 -- # scan_dsa=false 00:32:07.859 21:34:56 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:07.859 21:34:56 -- host/digest.sh@83 -- # bperfpid=104401 00:32:07.859 21:34:56 -- host/digest.sh@84 -- # waitforlisten 104401 /var/tmp/bperf.sock 00:32:07.859 21:34:56 -- common/autotest_common.sh@817 -- # '[' -z 104401 ']' 00:32:07.859 21:34:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:07.859 21:34:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:07.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:07.859 21:34:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:07.859 21:34:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:07.859 21:34:56 -- common/autotest_common.sh@10 -- # set +x 00:32:07.859 [2024-04-26 21:34:56.923434] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
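With the target up, common_target_config configures it over /var/tmp/spdk.sock; the trace only shows the side effects (the null0 bdev, the TCP transport init, and the listener on 10.0.0.2 port 4420). As a rough sketch of what that configuration typically looks like through rpc.py, where the null-bdev sizes and the exact option set are illustrative assumptions rather than values taken from this log:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc framework_start_init                   # resume the target started with --wait-for-rpc
  $rpc nvmf_create_transport -t tcp -o        # "-t tcp -o" is the NVMF_TRANSPORT_OPTS string from the trace
  $rpc bdev_null_create null0 100 4096        # 100 MiB null bdev with 4 KiB blocks (assumed sizes)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420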
00:32:07.859 [2024-04-26 21:34:56.924051] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104401 ] 00:32:07.859 [2024-04-26 21:34:57.056919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.119 [2024-04-26 21:34:57.130728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.054 21:34:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:09.054 21:34:57 -- common/autotest_common.sh@850 -- # return 0 00:32:09.054 21:34:57 -- host/digest.sh@86 -- # false 00:32:09.054 21:34:57 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:09.054 21:34:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:09.314 21:34:58 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:09.314 21:34:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:09.573 nvme0n1 00:32:09.573 21:34:58 -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:09.573 21:34:58 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:09.573 Running I/O for 2 seconds... 00:32:12.105 00:32:12.105 Latency(us) 00:32:12.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.105 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:12.105 nvme0n1 : 2.01 19409.11 75.82 0.00 0.00 6587.40 3534.37 18544.68 00:32:12.105 =================================================================================================================== 00:32:12.105 Total : 19409.11 75.82 0.00 0.00 6587.40 3534.37 18544.68 00:32:12.105 0 00:32:12.105 21:35:00 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:12.105 21:35:00 -- host/digest.sh@93 -- # get_accel_stats 00:32:12.105 21:35:00 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:12.105 | select(.opcode=="crc32c") 00:32:12.105 | "\(.module_name) \(.executed)"' 00:32:12.105 21:35:00 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:12.105 21:35:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:12.105 21:35:01 -- host/digest.sh@94 -- # false 00:32:12.105 21:35:01 -- host/digest.sh@94 -- # exp_module=software 00:32:12.105 21:35:01 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:12.105 21:35:01 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:12.105 21:35:01 -- host/digest.sh@98 -- # killprocess 104401 00:32:12.105 21:35:01 -- common/autotest_common.sh@936 -- # '[' -z 104401 ']' 00:32:12.105 21:35:01 -- common/autotest_common.sh@940 -- # kill -0 104401 00:32:12.105 21:35:01 -- common/autotest_common.sh@941 -- # uname 00:32:12.105 21:35:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:12.105 21:35:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104401 00:32:12.105 21:35:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:12.105 21:35:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:12.105 killing process with pid 104401 
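run_bperf's client side is fully visible in the trace: start bdevperf paused, attach an NVMe-oF TCP controller with data digest enabled, drive I/O for two seconds, then read the accel statistics to confirm the crc32c work was really executed (by the software module here, since DSA is disabled in this configuration). Condensed from the commands above:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc framework_start_init
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # exposes bdev nvme0n1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $rpc accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

The pass condition checked afterwards is that the reported module is "software" and the executed count is greater than zero; the first randread run above met it at roughly 19.4k IOPS.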
00:32:12.105 21:35:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104401' 00:32:12.105 21:35:01 -- common/autotest_common.sh@955 -- # kill 104401 00:32:12.105 Received shutdown signal, test time was about 2.000000 seconds 00:32:12.105 00:32:12.105 Latency(us) 00:32:12.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.105 =================================================================================================================== 00:32:12.105 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:12.105 21:35:01 -- common/autotest_common.sh@960 -- # wait 104401 00:32:12.105 21:35:01 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:12.105 21:35:01 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:12.105 21:35:01 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:12.105 21:35:01 -- host/digest.sh@80 -- # rw=randread 00:32:12.105 21:35:01 -- host/digest.sh@80 -- # bs=131072 00:32:12.106 21:35:01 -- host/digest.sh@80 -- # qd=16 00:32:12.106 21:35:01 -- host/digest.sh@80 -- # scan_dsa=false 00:32:12.106 21:35:01 -- host/digest.sh@83 -- # bperfpid=104492 00:32:12.106 21:35:01 -- host/digest.sh@84 -- # waitforlisten 104492 /var/tmp/bperf.sock 00:32:12.106 21:35:01 -- common/autotest_common.sh@817 -- # '[' -z 104492 ']' 00:32:12.106 21:35:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:12.106 21:35:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:12.106 21:35:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:12.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:12.106 21:35:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:12.106 21:35:01 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:12.106 21:35:01 -- common/autotest_common.sh@10 -- # set +x 00:32:12.364 [2024-04-26 21:35:01.366830] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:32:12.364 [2024-04-26 21:35:01.366920] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104492 ] 00:32:12.364 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:12.364 Zero copy mechanism will not be used. 
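The second pass repeats that flow with 128 KiB random reads at queue depth 16; only the -o and -q arguments change, and because 131072 bytes exceeds the 65536-byte threshold the notice above states that the zero-copy path will not be used for these I/Os. The invocation, as logged:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc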
00:32:12.364 [2024-04-26 21:35:01.499678] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.365 [2024-04-26 21:35:01.570590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.307 21:35:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:13.307 21:35:02 -- common/autotest_common.sh@850 -- # return 0 00:32:13.307 21:35:02 -- host/digest.sh@86 -- # false 00:32:13.307 21:35:02 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:13.307 21:35:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:13.566 21:35:02 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:13.566 21:35:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:13.835 nvme0n1 00:32:13.835 21:35:02 -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:13.835 21:35:02 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:13.835 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:13.836 Zero copy mechanism will not be used. 00:32:13.836 Running I/O for 2 seconds... 00:32:16.372 00:32:16.372 Latency(us) 00:32:16.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.372 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:16.372 nvme0n1 : 2.00 8175.07 1021.88 0.00 0.00 1953.85 600.99 4464.46 00:32:16.372 =================================================================================================================== 00:32:16.372 Total : 8175.07 1021.88 0.00 0.00 1953.85 600.99 4464.46 00:32:16.372 0 00:32:16.372 21:35:05 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:16.372 21:35:05 -- host/digest.sh@93 -- # get_accel_stats 00:32:16.372 21:35:05 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:16.372 21:35:05 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:16.372 | select(.opcode=="crc32c") 00:32:16.372 | "\(.module_name) \(.executed)"' 00:32:16.372 21:35:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:16.372 21:35:05 -- host/digest.sh@94 -- # false 00:32:16.372 21:35:05 -- host/digest.sh@94 -- # exp_module=software 00:32:16.372 21:35:05 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:16.372 21:35:05 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:16.372 21:35:05 -- host/digest.sh@98 -- # killprocess 104492 00:32:16.372 21:35:05 -- common/autotest_common.sh@936 -- # '[' -z 104492 ']' 00:32:16.372 21:35:05 -- common/autotest_common.sh@940 -- # kill -0 104492 00:32:16.372 21:35:05 -- common/autotest_common.sh@941 -- # uname 00:32:16.372 21:35:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:16.372 21:35:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104492 00:32:16.372 21:35:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:16.372 21:35:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:16.372 killing process with pid 104492 00:32:16.372 21:35:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104492' 00:32:16.372 21:35:05 -- common/autotest_common.sh@955 -- # kill 104492 
00:32:16.372 Received shutdown signal, test time was about 2.000000 seconds 00:32:16.372 00:32:16.372 Latency(us) 00:32:16.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:16.372 =================================================================================================================== 00:32:16.372 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:16.372 21:35:05 -- common/autotest_common.sh@960 -- # wait 104492 00:32:16.372 21:35:05 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:16.372 21:35:05 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:16.372 21:35:05 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:16.372 21:35:05 -- host/digest.sh@80 -- # rw=randwrite 00:32:16.372 21:35:05 -- host/digest.sh@80 -- # bs=4096 00:32:16.372 21:35:05 -- host/digest.sh@80 -- # qd=128 00:32:16.372 21:35:05 -- host/digest.sh@80 -- # scan_dsa=false 00:32:16.372 21:35:05 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:16.372 21:35:05 -- host/digest.sh@83 -- # bperfpid=104582 00:32:16.372 21:35:05 -- host/digest.sh@84 -- # waitforlisten 104582 /var/tmp/bperf.sock 00:32:16.372 21:35:05 -- common/autotest_common.sh@817 -- # '[' -z 104582 ']' 00:32:16.372 21:35:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:16.372 21:35:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:16.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:16.372 21:35:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:16.372 21:35:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:16.372 21:35:05 -- common/autotest_common.sh@10 -- # set +x 00:32:16.372 [2024-04-26 21:35:05.529911] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:32:16.372 [2024-04-26 21:35:05.530000] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104582 ] 00:32:16.632 [2024-04-26 21:35:05.656242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.632 [2024-04-26 21:35:05.728990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.632 21:35:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:16.632 21:35:05 -- common/autotest_common.sh@850 -- # return 0 00:32:16.632 21:35:05 -- host/digest.sh@86 -- # false 00:32:16.632 21:35:05 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:16.632 21:35:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:17.198 21:35:06 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:17.198 21:35:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:17.198 nvme0n1 00:32:17.198 21:35:06 -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:17.199 21:35:06 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:17.506 Running I/O for 2 seconds... 00:32:19.407 00:32:19.407 Latency(us) 00:32:19.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.407 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:19.407 nvme0n1 : 2.00 22312.02 87.16 0.00 0.00 5731.45 2833.22 11676.28 00:32:19.407 =================================================================================================================== 00:32:19.407 Total : 22312.02 87.16 0.00 0.00 5731.45 2833.22 11676.28 00:32:19.407 0 00:32:19.407 21:35:08 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:19.407 21:35:08 -- host/digest.sh@93 -- # get_accel_stats 00:32:19.407 21:35:08 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:19.407 21:35:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:19.407 21:35:08 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:19.407 | select(.opcode=="crc32c") 00:32:19.407 | "\(.module_name) \(.executed)"' 00:32:19.666 21:35:08 -- host/digest.sh@94 -- # false 00:32:19.666 21:35:08 -- host/digest.sh@94 -- # exp_module=software 00:32:19.666 21:35:08 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:19.666 21:35:08 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:19.666 21:35:08 -- host/digest.sh@98 -- # killprocess 104582 00:32:19.666 21:35:08 -- common/autotest_common.sh@936 -- # '[' -z 104582 ']' 00:32:19.666 21:35:08 -- common/autotest_common.sh@940 -- # kill -0 104582 00:32:19.666 21:35:08 -- common/autotest_common.sh@941 -- # uname 00:32:19.666 21:35:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:19.666 21:35:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104582 00:32:19.666 21:35:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:19.666 21:35:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:19.666 21:35:08 -- common/autotest_common.sh@954 
-- # echo 'killing process with pid 104582' 00:32:19.666 killing process with pid 104582 00:32:19.666 21:35:08 -- common/autotest_common.sh@955 -- # kill 104582 00:32:19.666 Received shutdown signal, test time was about 2.000000 seconds 00:32:19.666 00:32:19.666 Latency(us) 00:32:19.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.666 =================================================================================================================== 00:32:19.666 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:19.666 21:35:08 -- common/autotest_common.sh@960 -- # wait 104582 00:32:19.925 21:35:09 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:19.926 21:35:09 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:19.926 21:35:09 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:19.926 21:35:09 -- host/digest.sh@80 -- # rw=randwrite 00:32:19.926 21:35:09 -- host/digest.sh@80 -- # bs=131072 00:32:19.926 21:35:09 -- host/digest.sh@80 -- # qd=16 00:32:19.926 21:35:09 -- host/digest.sh@80 -- # scan_dsa=false 00:32:19.926 21:35:09 -- host/digest.sh@83 -- # bperfpid=104653 00:32:19.926 21:35:09 -- host/digest.sh@84 -- # waitforlisten 104653 /var/tmp/bperf.sock 00:32:19.926 21:35:09 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:19.926 21:35:09 -- common/autotest_common.sh@817 -- # '[' -z 104653 ']' 00:32:19.926 21:35:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:19.926 21:35:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:19.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:19.926 21:35:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:19.926 21:35:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:19.926 21:35:09 -- common/autotest_common.sh@10 -- # set +x 00:32:19.926 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:19.926 Zero copy mechanism will not be used. 00:32:19.926 [2024-04-26 21:35:09.073016] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:32:19.926 [2024-04-26 21:35:09.073083] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104653 ] 00:32:20.184 [2024-04-26 21:35:09.197109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.184 [2024-04-26 21:35:09.249367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.791 21:35:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:20.791 21:35:09 -- common/autotest_common.sh@850 -- # return 0 00:32:20.791 21:35:09 -- host/digest.sh@86 -- # false 00:32:20.791 21:35:09 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:20.791 21:35:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:21.049 21:35:10 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:21.049 21:35:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:21.618 nvme0n1 00:32:21.618 21:35:10 -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:21.618 21:35:10 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:21.618 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:21.618 Zero copy mechanism will not be used. 00:32:21.618 Running I/O for 2 seconds... 00:32:23.521 00:32:23.521 Latency(us) 00:32:23.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.521 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:23.521 nvme0n1 : 2.00 8639.29 1079.91 0.00 0.00 1848.21 1438.07 3391.27 00:32:23.521 =================================================================================================================== 00:32:23.521 Total : 8639.29 1079.91 0.00 0.00 1848.21 1438.07 3391.27 00:32:23.521 0 00:32:23.521 21:35:12 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:23.521 21:35:12 -- host/digest.sh@93 -- # get_accel_stats 00:32:23.521 21:35:12 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:23.521 21:35:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:23.521 21:35:12 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:23.521 | select(.opcode=="crc32c") 00:32:23.521 | "\(.module_name) \(.executed)"' 00:32:23.780 21:35:12 -- host/digest.sh@94 -- # false 00:32:23.780 21:35:12 -- host/digest.sh@94 -- # exp_module=software 00:32:23.780 21:35:12 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:23.780 21:35:12 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:23.780 21:35:12 -- host/digest.sh@98 -- # killprocess 104653 00:32:23.780 21:35:12 -- common/autotest_common.sh@936 -- # '[' -z 104653 ']' 00:32:23.780 21:35:12 -- common/autotest_common.sh@940 -- # kill -0 104653 00:32:23.780 21:35:12 -- common/autotest_common.sh@941 -- # uname 00:32:23.780 21:35:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:23.780 21:35:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104653 00:32:23.780 killing process with pid 104653 00:32:23.780 Received shutdown signal, test time 
was about 2.000000 seconds 00:32:23.780 00:32:23.780 Latency(us) 00:32:23.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.780 =================================================================================================================== 00:32:23.780 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:23.780 21:35:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:23.780 21:35:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:23.780 21:35:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104653' 00:32:23.780 21:35:12 -- common/autotest_common.sh@955 -- # kill 104653 00:32:23.780 21:35:12 -- common/autotest_common.sh@960 -- # wait 104653 00:32:24.038 21:35:13 -- host/digest.sh@132 -- # killprocess 104351 00:32:24.038 21:35:13 -- common/autotest_common.sh@936 -- # '[' -z 104351 ']' 00:32:24.038 21:35:13 -- common/autotest_common.sh@940 -- # kill -0 104351 00:32:24.038 21:35:13 -- common/autotest_common.sh@941 -- # uname 00:32:24.038 21:35:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:24.038 21:35:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104351 00:32:24.038 killing process with pid 104351 00:32:24.038 21:35:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:24.038 21:35:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:24.038 21:35:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104351' 00:32:24.038 21:35:13 -- common/autotest_common.sh@955 -- # kill 104351 00:32:24.038 21:35:13 -- common/autotest_common.sh@960 -- # wait 104351 00:32:24.296 00:32:24.296 real 0m17.751s 00:32:24.296 user 0m33.909s 00:32:24.296 sys 0m4.349s 00:32:24.297 ************************************ 00:32:24.297 END TEST nvmf_digest_clean 00:32:24.297 ************************************ 00:32:24.297 21:35:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:24.297 21:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:24.297 21:35:13 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:24.297 21:35:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:24.297 21:35:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:24.297 21:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:24.297 ************************************ 00:32:24.297 START TEST nvmf_digest_error 00:32:24.297 ************************************ 00:32:24.297 21:35:13 -- common/autotest_common.sh@1111 -- # run_digest_error 00:32:24.297 21:35:13 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:24.297 21:35:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:24.297 21:35:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:24.297 21:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:24.556 21:35:13 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:24.556 21:35:13 -- nvmf/common.sh@470 -- # nvmfpid=104769 00:32:24.556 21:35:13 -- nvmf/common.sh@471 -- # waitforlisten 104769 00:32:24.556 21:35:13 -- common/autotest_common.sh@817 -- # '[' -z 104769 ']' 00:32:24.556 21:35:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:24.556 21:35:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:24.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
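The nvmf_digest_clean phase that just ended (about 17.8 s wall clock) cycled the same run_bperf recipe through four workload combinations; in outline:

  for args in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
      run_bperf $args false   # rw, I/O size in bytes, queue depth; the trailing false keeps DSA disabled
  done

The error-path variant (nvmf_digest_error) then repeats the pattern against a freshly launched target (pid 104769).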
00:32:24.556 21:35:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:24.556 21:35:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:24.556 21:35:13 -- common/autotest_common.sh@10 -- # set +x 00:32:24.556 [2024-04-26 21:35:13.615292] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:32:24.556 [2024-04-26 21:35:13.615408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:24.556 [2024-04-26 21:35:13.755549] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.814 [2024-04-26 21:35:13.807511] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:24.814 [2024-04-26 21:35:13.807564] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:24.814 [2024-04-26 21:35:13.807571] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:24.814 [2024-04-26 21:35:13.807576] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:24.814 [2024-04-26 21:35:13.807581] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:24.814 [2024-04-26 21:35:13.807604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.381 21:35:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:25.382 21:35:14 -- common/autotest_common.sh@850 -- # return 0 00:32:25.382 21:35:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:25.382 21:35:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:25.382 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:32:25.382 21:35:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.382 21:35:14 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:25.382 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.382 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:32:25.382 [2024-04-26 21:35:14.586601] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:25.382 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.382 21:35:14 -- host/digest.sh@105 -- # common_target_config 00:32:25.382 21:35:14 -- host/digest.sh@43 -- # rpc_cmd 00:32:25.382 21:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:25.382 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:32:25.640 null0 00:32:25.640 [2024-04-26 21:35:14.678606] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.640 [2024-04-26 21:35:14.702662] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:25.640 21:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:25.640 21:35:14 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:25.640 21:35:14 -- host/digest.sh@54 -- # local rw bs qd 00:32:25.640 21:35:14 -- host/digest.sh@56 -- # rw=randread 00:32:25.640 21:35:14 -- host/digest.sh@56 -- # bs=4096 00:32:25.640 21:35:14 -- host/digest.sh@56 -- # qd=128 00:32:25.640 21:35:14 -- host/digest.sh@58 -- # bperfpid=104819 00:32:25.641 21:35:14 -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:25.641 21:35:14 -- host/digest.sh@60 -- # waitforlisten 104819 /var/tmp/bperf.sock 00:32:25.641 21:35:14 -- common/autotest_common.sh@817 -- # '[' -z 104819 ']' 00:32:25.641 21:35:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:25.641 21:35:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:25.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:25.641 21:35:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:25.641 21:35:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:25.641 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:32:25.641 [2024-04-26 21:35:14.765041] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:32:25.641 [2024-04-26 21:35:14.765143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104819 ] 00:32:25.899 [2024-04-26 21:35:14.915028] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.899 [2024-04-26 21:35:14.969077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.465 21:35:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:26.465 21:35:15 -- common/autotest_common.sh@850 -- # return 0 00:32:26.465 21:35:15 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:26.465 21:35:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:26.723 21:35:15 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:26.723 21:35:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.723 21:35:15 -- common/autotest_common.sh@10 -- # set +x 00:32:26.723 21:35:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.723 21:35:15 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:26.723 21:35:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:26.983 nvme0n1 00:32:26.983 21:35:16 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:26.983 21:35:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:26.983 21:35:16 -- common/autotest_common.sh@10 -- # set +x 00:32:26.983 21:35:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:26.983 21:35:16 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:26.983 21:35:16 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:27.242 Running I/O for 2 seconds... 
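The error phase differs from the clean phase in the ways visible above: the target's crc32c operation is reassigned to the accel error-injection module, the initiator's bdev_nvme options are adjusted (--nvme-error-stat --bdev-retry-count -1) before the controller is attached, and "corrupt" errors are then injected so that reads begin failing their data digest check, which is what the stream of COMMAND TRANSIENT TRANSPORT ERROR completions below shows. In outline, with target RPCs going to /var/tmp/spdk.sock and initiator RPCs to /var/tmp/bperf.sock, flags exactly as logged:

  tgt_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  bperf_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  $tgt_rpc accel_assign_opc -o crc32c -m error                    # route crc32c through the error module
  $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $tgt_rpc accel_error_inject_error -o crc32c -t disable          # injection disabled at first, as in the trace
  $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256   # start corrupting crc32c results
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests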
00:32:27.242 [2024-04-26 21:35:16.354055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.242 [2024-04-26 21:35:16.354122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.242 [2024-04-26 21:35:16.354134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.242 [2024-04-26 21:35:16.369223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.242 [2024-04-26 21:35:16.369290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.242 [2024-04-26 21:35:16.369303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.242 [2024-04-26 21:35:16.384579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.242 [2024-04-26 21:35:16.384645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.242 [2024-04-26 21:35:16.384656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.242 [2024-04-26 21:35:16.395137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.242 [2024-04-26 21:35:16.395189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.242 [2024-04-26 21:35:16.395200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.243 [2024-04-26 21:35:16.409453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.243 [2024-04-26 21:35:16.409505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.243 [2024-04-26 21:35:16.409516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.243 [2024-04-26 21:35:16.421784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.243 [2024-04-26 21:35:16.421844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.243 [2024-04-26 21:35:16.421857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.243 [2024-04-26 21:35:16.437067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.243 [2024-04-26 21:35:16.437123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.243 [2024-04-26 21:35:16.437134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.243 [2024-04-26 21:35:16.452293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.243 [2024-04-26 21:35:16.452363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.243 [2024-04-26 21:35:16.452375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.243 [2024-04-26 21:35:16.466186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.243 [2024-04-26 21:35:16.466247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.243 [2024-04-26 21:35:16.466258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.243 [2024-04-26 21:35:16.478601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.243 [2024-04-26 21:35:16.478660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.243 [2024-04-26 21:35:16.478671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.243 [2024-04-26 21:35:16.493988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.243 [2024-04-26 21:35:16.494048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.243 [2024-04-26 21:35:16.494059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.502 [2024-04-26 21:35:16.508144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.502 [2024-04-26 21:35:16.508208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.502 [2024-04-26 21:35:16.508220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.502 [2024-04-26 21:35:16.520409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.502 [2024-04-26 21:35:16.520461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.502 [2024-04-26 21:35:16.520471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.502 [2024-04-26 21:35:16.533893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.502 [2024-04-26 21:35:16.533952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.502 [2024-04-26 21:35:16.533963] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.502 [2024-04-26 21:35:16.549812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.502 [2024-04-26 21:35:16.549872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.502 [2024-04-26 21:35:16.549884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.502 [2024-04-26 21:35:16.563388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.502 [2024-04-26 21:35:16.563448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.502 [2024-04-26 21:35:16.563459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.502 [2024-04-26 21:35:16.578133] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.502 [2024-04-26 21:35:16.578195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.502 [2024-04-26 21:35:16.578206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.502 [2024-04-26 21:35:16.591264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.502 [2024-04-26 21:35:16.591327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.502 [2024-04-26 21:35:16.591351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.502 [2024-04-26 21:35:16.606667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.502 [2024-04-26 21:35:16.606731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.502 [2024-04-26 21:35:16.606744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.502 [2024-04-26 21:35:16.620503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.502 [2024-04-26 21:35:16.620564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.502 [2024-04-26 21:35:16.620575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.502 [2024-04-26 21:35:16.631831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.502 [2024-04-26 21:35:16.631897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.502 [2024-04-26 21:35:16.631910] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.502 [2024-04-26 21:35:16.647216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.502 [2024-04-26 21:35:16.647278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.502 [2024-04-26 21:35:16.647289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.502 [2024-04-26 21:35:16.661701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.502 [2024-04-26 21:35:16.661776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.502 [2024-04-26 21:35:16.661789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.502 [2024-04-26 21:35:16.672981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.503 [2024-04-26 21:35:16.673034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.503 [2024-04-26 21:35:16.673045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.503 [2024-04-26 21:35:16.688505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.503 [2024-04-26 21:35:16.688554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.503 [2024-04-26 21:35:16.688565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.503 [2024-04-26 21:35:16.703145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.503 [2024-04-26 21:35:16.703216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.503 [2024-04-26 21:35:16.703230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.503 [2024-04-26 21:35:16.715854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.503 [2024-04-26 21:35:16.715923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.503 [2024-04-26 21:35:16.715934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.503 [2024-04-26 21:35:16.731031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.503 [2024-04-26 21:35:16.731088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:27.503 [2024-04-26 21:35:16.731100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.503 [2024-04-26 21:35:16.745251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.503 [2024-04-26 21:35:16.745307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.503 [2024-04-26 21:35:16.745318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.757785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.757858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.757874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.774079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.774135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.774146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.786900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.786952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.786963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.800075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.800135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.800146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.814187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.814240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.814253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.827761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.827813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10343 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.827824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.840483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.840535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.840547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.854954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.855003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.855014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.868682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.868747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.868759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.882364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.882428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.882440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.897141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.897200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.897211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.911255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.911312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.911323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.924837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.924893] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.924903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.936304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.936384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.936399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.950174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.950234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.950245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.963321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.963391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.963402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.976839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.976909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.976922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:16.990765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:16.990823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:16.990834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.763 [2024-04-26 21:35:17.001780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:27.763 [2024-04-26 21:35:17.001832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.763 [2024-04-26 21:35:17.001843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.023 [2024-04-26 21:35:17.016144] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.023 [2024-04-26 21:35:17.016204] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.023 [2024-04-26 21:35:17.016215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.023 [2024-04-26 21:35:17.029972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.023 [2024-04-26 21:35:17.030029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.023 [2024-04-26 21:35:17.030040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.023 [2024-04-26 21:35:17.043242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.023 [2024-04-26 21:35:17.043300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.023 [2024-04-26 21:35:17.043311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.023 [2024-04-26 21:35:17.057196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.023 [2024-04-26 21:35:17.057256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.023 [2024-04-26 21:35:17.057268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.023 [2024-04-26 21:35:17.071622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.023 [2024-04-26 21:35:17.071686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.023 [2024-04-26 21:35:17.071715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.023 [2024-04-26 21:35:17.085172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.023 [2024-04-26 21:35:17.085233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.023 [2024-04-26 21:35:17.085244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.023 [2024-04-26 21:35:17.098231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.023 [2024-04-26 21:35:17.098286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.023 [2024-04-26 21:35:17.098297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.023 [2024-04-26 21:35:17.111048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 
00:32:28.023 [2024-04-26 21:35:17.111106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.023 [2024-04-26 21:35:17.111117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.023 [2024-04-26 21:35:17.123435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.023 [2024-04-26 21:35:17.123499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.023 [2024-04-26 21:35:17.123510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.023 [2024-04-26 21:35:17.138678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.023 [2024-04-26 21:35:17.138741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.024 [2024-04-26 21:35:17.138752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.024 [2024-04-26 21:35:17.150011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.024 [2024-04-26 21:35:17.150068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.024 [2024-04-26 21:35:17.150078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.024 [2024-04-26 21:35:17.164086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.024 [2024-04-26 21:35:17.164144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.024 [2024-04-26 21:35:17.164155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.024 [2024-04-26 21:35:17.177932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.024 [2024-04-26 21:35:17.177990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.024 [2024-04-26 21:35:17.178002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.024 [2024-04-26 21:35:17.191453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.024 [2024-04-26 21:35:17.191515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.024 [2024-04-26 21:35:17.191526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.024 [2024-04-26 21:35:17.202705] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.024 [2024-04-26 21:35:17.202766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.024 [2024-04-26 21:35:17.202777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.024 [2024-04-26 21:35:17.215479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.024 [2024-04-26 21:35:17.215537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.024 [2024-04-26 21:35:17.215549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.024 [2024-04-26 21:35:17.230744] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.024 [2024-04-26 21:35:17.230810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.024 [2024-04-26 21:35:17.230821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.024 [2024-04-26 21:35:17.244454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.024 [2024-04-26 21:35:17.244519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.024 [2024-04-26 21:35:17.244531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.024 [2024-04-26 21:35:17.258444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.024 [2024-04-26 21:35:17.258505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.024 [2024-04-26 21:35:17.258516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.024 [2024-04-26 21:35:17.273031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.024 [2024-04-26 21:35:17.273108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.024 [2024-04-26 21:35:17.273122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.284443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.284511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.284522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.296367] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.296431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.296442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.310699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.310763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.310774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.323941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.324007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.324019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.338305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.338384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.338395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.351233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.351294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.351305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.364701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.364764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.364776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.379386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.379453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.379463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
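Each of the repeated entries above records the same event: nvme_tcp detects a data digest (CRC32C) mismatch on a received READ, and the command is completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable failure rather than a hard error. As a rough illustration only (the file name below is hypothetical, not something produced by this run), the number of such completions captured in a saved console log could be tallied with:

    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bdevperf-console.log

The test itself does not parse the console output; as the trace further down shows, it reads the count from the controller's error statistics via bdev_get_iostat.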
00:32:28.284 [2024-04-26 21:35:17.393658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.393729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.393744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.418233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.418365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.418395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.435021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.435086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.435097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.448763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.448830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.448841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.459990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.460052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.460063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.473248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.473311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.473323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.487279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.487352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.487364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.498378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.498431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.498441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.512845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.512908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.512920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.284 [2024-04-26 21:35:17.524444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.284 [2024-04-26 21:35:17.524502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.284 [2024-04-26 21:35:17.524513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.544 [2024-04-26 21:35:17.539534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.544 [2024-04-26 21:35:17.539597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.544 [2024-04-26 21:35:17.539608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.544 [2024-04-26 21:35:17.553370] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.544 [2024-04-26 21:35:17.553427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.544 [2024-04-26 21:35:17.553438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.544 [2024-04-26 21:35:17.567720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.544 [2024-04-26 21:35:17.567775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.544 [2024-04-26 21:35:17.567785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.544 [2024-04-26 21:35:17.580227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.544 [2024-04-26 21:35:17.580284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.544 [2024-04-26 21:35:17.580294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.544 [2024-04-26 21:35:17.591699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.544 [2024-04-26 21:35:17.591753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.544 [2024-04-26 21:35:17.591764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.606178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.606234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.606246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.618685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.618738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.618753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.632824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.632882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.632894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.644811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.644866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.644876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.657393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.657456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.657468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.671172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.671245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.671257] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.685657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.685721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.685737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.699936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.700007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.700020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.713414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.713467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.713479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.725465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.725519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.725532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.739276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.739345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.739357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.752478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.752534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.752544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.766546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.766605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:28.545 [2024-04-26 21:35:17.766616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.780674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.780735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.780746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.545 [2024-04-26 21:35:17.792954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.545 [2024-04-26 21:35:17.793013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.545 [2024-04-26 21:35:17.793024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.807041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.807102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.807113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.821228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.821290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.821300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.832462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.832519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.832530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.847046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.847109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.847120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.859342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.859404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9049 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.859416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.871599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.871649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.871659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.885936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.885983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.885995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.900035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.900091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.900102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.912565] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.912617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.912627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.927464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.927526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.927538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.940459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.940511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.940523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.951309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.951375] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.951387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.964843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.964898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.964909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.978986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.979047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.979058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:17.992388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:17.992445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:17.992455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:18.006190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:18.006246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:18.006257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:18.019983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:18.020041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:18.020052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:18.033488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.806 [2024-04-26 21:35:18.033543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.806 [2024-04-26 21:35:18.033554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.806 [2024-04-26 21:35:18.044694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:28.807 [2024-04-26 21:35:18.044745] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.807 [2024-04-26 21:35:18.044756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.059178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.059230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.059241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.070986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.071049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.071064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.087238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.087298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.087308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.098855] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.098909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.098920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.112814] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.112873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.112884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.126971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.127034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.127049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.142589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 
00:32:29.067 [2024-04-26 21:35:18.142647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.142657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.155856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.155915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.155926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.167393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.167469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.167482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.181800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.181857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.181868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.196071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.196131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.196141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.208110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.208171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.208183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.223831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.223886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.223901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.238202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.238273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.238287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.251865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.251929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.251940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.264279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.264349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.264361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.276959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.277020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.277031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.289846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.289906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.289918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.303094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.303150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.303161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.067 [2024-04-26 21:35:18.315402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.067 [2024-04-26 21:35:18.315456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.067 [2024-04-26 21:35:18.315467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.326 [2024-04-26 21:35:18.330142] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc3d60) 00:32:29.326 [2024-04-26 21:35:18.330199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:29.326 [2024-04-26 21:35:18.330210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:29.326 00:32:29.326 Latency(us) 00:32:29.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.326 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:29.326 nvme0n1 : 2.01 18589.53 72.62 0.00 0.00 6877.84 3577.29 26328.87 00:32:29.326 =================================================================================================================== 00:32:29.326 Total : 18589.53 72.62 0.00 0.00 6877.84 3577.29 26328.87 00:32:29.326 0 00:32:29.326 21:35:18 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:29.326 21:35:18 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:29.326 21:35:18 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:29.326 | .driver_specific 00:32:29.326 | .nvme_error 00:32:29.326 | .status_code 00:32:29.326 | .command_transient_transport_error' 00:32:29.326 21:35:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:29.585 21:35:18 -- host/digest.sh@71 -- # (( 146 > 0 )) 00:32:29.585 21:35:18 -- host/digest.sh@73 -- # killprocess 104819 00:32:29.585 21:35:18 -- common/autotest_common.sh@936 -- # '[' -z 104819 ']' 00:32:29.585 21:35:18 -- common/autotest_common.sh@940 -- # kill -0 104819 00:32:29.585 21:35:18 -- common/autotest_common.sh@941 -- # uname 00:32:29.585 21:35:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:29.585 21:35:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104819 00:32:29.585 21:35:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:29.585 21:35:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:29.585 killing process with pid 104819 00:32:29.585 21:35:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104819' 00:32:29.585 21:35:18 -- common/autotest_common.sh@955 -- # kill 104819 00:32:29.585 Received shutdown signal, test time was about 2.000000 seconds 00:32:29.585 00:32:29.585 Latency(us) 00:32:29.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:29.585 =================================================================================================================== 00:32:29.585 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:29.585 21:35:18 -- common/autotest_common.sh@960 -- # wait 104819 00:32:29.585 21:35:18 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:32:29.585 21:35:18 -- host/digest.sh@54 -- # local rw bs qd 00:32:29.585 21:35:18 -- host/digest.sh@56 -- # rw=randread 00:32:29.585 21:35:18 -- host/digest.sh@56 -- # bs=131072 00:32:29.585 21:35:18 -- host/digest.sh@56 -- # qd=16 00:32:29.585 21:35:18 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:32:29.844 21:35:18 -- host/digest.sh@58 -- # bperfpid=104904 00:32:29.844 21:35:18 -- host/digest.sh@60 -- # waitforlisten 104904 /var/tmp/bperf.sock 00:32:29.844 21:35:18 -- common/autotest_common.sh@817 -- # '[' -z 104904 
']' 00:32:29.844 21:35:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:29.844 21:35:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:29.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:29.844 21:35:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:29.844 21:35:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:29.844 21:35:18 -- common/autotest_common.sh@10 -- # set +x 00:32:29.844 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:29.844 Zero copy mechanism will not be used. 00:32:29.844 [2024-04-26 21:35:18.873135] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:32:29.844 [2024-04-26 21:35:18.873212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104904 ] 00:32:29.844 [2024-04-26 21:35:19.014420] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.844 [2024-04-26 21:35:19.067442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.779 21:35:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:30.779 21:35:19 -- common/autotest_common.sh@850 -- # return 0 00:32:30.779 21:35:19 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:30.779 21:35:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:30.779 21:35:20 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:30.779 21:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:30.779 21:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.038 21:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.038 21:35:20 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:31.038 21:35:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:31.298 nvme0n1 00:32:31.298 21:35:20 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:31.298 21:35:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:31.298 21:35:20 -- common/autotest_common.sh@10 -- # set +x 00:32:31.298 21:35:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:31.298 21:35:20 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:31.298 21:35:20 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:31.298 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:31.298 Zero copy mechanism will not be used. 00:32:31.298 Running I/O for 2 seconds... 
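The trace above outlines how host/digest.sh sets up the second error case (randread, 128 KiB I/O, queue depth 16): a fresh bdevperf instance is started on /var/tmp/bperf.sock, error statistics and unlimited bdev retries are enabled, CRC32C injection is disabled while the controller is attached over TCP with data digest enabled (--ddgst), and injection is then switched to corrupt mode with interval 32 before perform_tests starts the 2-second run. A minimal sketch of that flow as standalone RPC calls (rpc.py and bdevperf.py abbreviate the full scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py paths used in the trace; the two accel calls are issued through rpc_cmd in the script, i.e. without -s, against the target application's default socket rather than bperf.sock):

    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py accel_error_inject_error -o crc32c -t disable
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    bdevperf.py -s /var/tmp/bperf.sock perform_tests
    rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1

The final bdev_get_iostat call is what get_transient_errcount filters with jq (.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error) to decide whether enough transient transport errors were observed.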
00:32:31.298 [2024-04-26 21:35:20.495155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.298 [2024-04-26 21:35:20.495221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.298 [2024-04-26 21:35:20.495232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.298 [2024-04-26 21:35:20.499411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.298 [2024-04-26 21:35:20.499464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.298 [2024-04-26 21:35:20.499476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.298 [2024-04-26 21:35:20.504366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.298 [2024-04-26 21:35:20.504420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.298 [2024-04-26 21:35:20.504430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.298 [2024-04-26 21:35:20.507825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.298 [2024-04-26 21:35:20.507874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.298 [2024-04-26 21:35:20.507885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.298 [2024-04-26 21:35:20.512026] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.298 [2024-04-26 21:35:20.512081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.298 [2024-04-26 21:35:20.512092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.298 [2024-04-26 21:35:20.516549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.298 [2024-04-26 21:35:20.516598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.298 [2024-04-26 21:35:20.516610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.298 [2024-04-26 21:35:20.521208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.298 [2024-04-26 21:35:20.521263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.298 [2024-04-26 21:35:20.521274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.298 [2024-04-26 21:35:20.524312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.298 [2024-04-26 21:35:20.524375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.298 [2024-04-26 21:35:20.524401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.298 [2024-04-26 21:35:20.529178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.299 [2024-04-26 21:35:20.529236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.299 [2024-04-26 21:35:20.529247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.299 [2024-04-26 21:35:20.532284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.299 [2024-04-26 21:35:20.532362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.299 [2024-04-26 21:35:20.532373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.299 [2024-04-26 21:35:20.536785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.299 [2024-04-26 21:35:20.536844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.299 [2024-04-26 21:35:20.536855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.299 [2024-04-26 21:35:20.541457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.299 [2024-04-26 21:35:20.541513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.299 [2024-04-26 21:35:20.541524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.299 [2024-04-26 21:35:20.546128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.299 [2024-04-26 21:35:20.546188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.299 [2024-04-26 21:35:20.546199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.299 [2024-04-26 21:35:20.549406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.299 [2024-04-26 21:35:20.549458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.299 [2024-04-26 21:35:20.549468] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.559 [2024-04-26 21:35:20.553094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.559 [2024-04-26 21:35:20.553146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.559 [2024-04-26 21:35:20.553157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.559 [2024-04-26 21:35:20.557512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.559 [2024-04-26 21:35:20.557569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.559 [2024-04-26 21:35:20.557581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.559 [2024-04-26 21:35:20.561423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.559 [2024-04-26 21:35:20.561479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.559 [2024-04-26 21:35:20.561489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.559 [2024-04-26 21:35:20.564620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.559 [2024-04-26 21:35:20.564667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.559 [2024-04-26 21:35:20.564678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.559 [2024-04-26 21:35:20.569652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.559 [2024-04-26 21:35:20.569715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.569726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.574403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.574461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.574473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.578752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.578809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.578821] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.582120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.582172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.582182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.586436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.586493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.586504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.591225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.591285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.591297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.595579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.595633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.595644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.598749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.598799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.598809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.602447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.602495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.602505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.607139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.607198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:31.560 [2024-04-26 21:35:20.607210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.611677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.611737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.611749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.614287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.614347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.614359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.618800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.618856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.618868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.622878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.622930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.622940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.626018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.626071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.626082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.629811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.629865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.629876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.633023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.633074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.633085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.636165] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.636217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.636227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.639757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.639805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.639831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.643602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.643651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.643679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.646766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.646818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.646828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.650613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.650667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.650679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.654219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.654279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.654289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.657849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.657906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.657918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.662011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.662068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.662079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.665633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.665686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.665698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.669258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.669312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.669322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.560 [2024-04-26 21:35:20.672501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.560 [2024-04-26 21:35:20.672551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.560 [2024-04-26 21:35:20.672562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.675981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.676035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.676046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.679328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.679388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.679399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.683363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 
[2024-04-26 21:35:20.683412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.683423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.686785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.686836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.686847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.691129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.691180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.691192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.695166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.695221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.695231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.698610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.698665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.698676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.702466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.702516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.702527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.706404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.706456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.706467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.709637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.709686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.709698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.713431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.713485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.713496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.718298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.718366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.718378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.723685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.723749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.723761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.727003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.727056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.727066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.731365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.731421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.731431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.735966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.736024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.736036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.740394] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.740450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.740461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.743714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.743765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.743776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.748063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.748121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.748133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.753128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.753188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.753199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.756250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.756299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.756310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.760666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.760722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.760733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.765306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.765376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.765388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:32:31.561 [2024-04-26 21:35:20.767940] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.767988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.768000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.771735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.771786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.771796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.775463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.775514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.775525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.780057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.561 [2024-04-26 21:35:20.780112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.561 [2024-04-26 21:35:20.780123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.561 [2024-04-26 21:35:20.783309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.562 [2024-04-26 21:35:20.783366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.562 [2024-04-26 21:35:20.783377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.562 [2024-04-26 21:35:20.787284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.562 [2024-04-26 21:35:20.787348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.562 [2024-04-26 21:35:20.787360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.562 [2024-04-26 21:35:20.792087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.562 [2024-04-26 21:35:20.792137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.562 [2024-04-26 21:35:20.792149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.562 [2024-04-26 21:35:20.796818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.562 [2024-04-26 21:35:20.796868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.562 [2024-04-26 21:35:20.796880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.562 [2024-04-26 21:35:20.800057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.562 [2024-04-26 21:35:20.800103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.562 [2024-04-26 21:35:20.800114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.562 [2024-04-26 21:35:20.804271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.562 [2024-04-26 21:35:20.804320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.562 [2024-04-26 21:35:20.804341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.562 [2024-04-26 21:35:20.808377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.562 [2024-04-26 21:35:20.808426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.562 [2024-04-26 21:35:20.808437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.822 [2024-04-26 21:35:20.812321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.822 [2024-04-26 21:35:20.812390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.822 [2024-04-26 21:35:20.812402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.822 [2024-04-26 21:35:20.815401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.822 [2024-04-26 21:35:20.815453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.822 [2024-04-26 21:35:20.815464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.822 [2024-04-26 21:35:20.819809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.822 [2024-04-26 21:35:20.819861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.822 [2024-04-26 21:35:20.819871] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.822 [2024-04-26 21:35:20.824142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.822 [2024-04-26 21:35:20.824198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.822 [2024-04-26 21:35:20.824209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.822 [2024-04-26 21:35:20.828579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.822 [2024-04-26 21:35:20.828632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.822 [2024-04-26 21:35:20.828643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.822 [2024-04-26 21:35:20.831258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.822 [2024-04-26 21:35:20.831303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.822 [2024-04-26 21:35:20.831314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.822 [2024-04-26 21:35:20.835721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.822 [2024-04-26 21:35:20.835773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.822 [2024-04-26 21:35:20.835783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.822 [2024-04-26 21:35:20.839007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.822 [2024-04-26 21:35:20.839060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.822 [2024-04-26 21:35:20.839071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.822 [2024-04-26 21:35:20.842682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.822 [2024-04-26 21:35:20.842732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.822 [2024-04-26 21:35:20.842742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.822 [2024-04-26 21:35:20.846056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.822 [2024-04-26 21:35:20.846105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.822 [2024-04-26 21:35:20.846115] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.822 [2024-04-26 21:35:20.849839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.822 [2024-04-26 21:35:20.849888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.822 [2024-04-26 21:35:20.849899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.853956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.854002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.854012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.858786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.858838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.858849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.862240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.862285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.862296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.865795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.865843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.865853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.869950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.870003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.870014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.874325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.874390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:31.823 [2024-04-26 21:35:20.874400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.877512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.877560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.877571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.881296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.881356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.881367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.885783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.885832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.885842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.889984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.890036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.890046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.892759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.892801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.892810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.897217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.897268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.897277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.901409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.901466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10304 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.901476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.904679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.904733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.904743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.908892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.908949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.908960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.913198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.913250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.913260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.916464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.916505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.916515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.920192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.920237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.920247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.924696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.924746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.924757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.928410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.928458] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.928467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.931296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.931354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.931364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.823 [2024-04-26 21:35:20.935128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.823 [2024-04-26 21:35:20.935178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.823 [2024-04-26 21:35:20.935188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.938399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.938442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.938452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.942242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.942296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.942307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.945112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.945162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.945173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.948935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.948987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.948998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.953470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.953520] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.953531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.956977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.957026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.957037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.960522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.960573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.960583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.963901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.963951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.963962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.967699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.967749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.967759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.972431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.972483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.972492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.975773] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.975821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.975831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.979757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 
00:32:31.824 [2024-04-26 21:35:20.979810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.979820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.984190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.984242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.984252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.988737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.988788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.988799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.991681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.991723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.991733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.995289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.995342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.995353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:20.999370] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:20.999414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:20.999423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:21.002859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:21.002904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:21.002913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:21.006236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:21.006283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:21.006293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:21.009630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:21.009677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.824 [2024-04-26 21:35:21.009686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.824 [2024-04-26 21:35:21.012582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.824 [2024-04-26 21:35:21.012624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.012632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.016254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.016299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.016308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.019804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.019848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.019858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.023382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.023422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.023433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.026780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.026827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.026838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.030590] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.030648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.030659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.034757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.034822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.034834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.039452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.039505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.039517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.042836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.042886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.042896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.046715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.046767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.046778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.050768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.050822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.050832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.054547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.054594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.054605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:32:31.825 [2024-04-26 21:35:21.057863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.057913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.057923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.062519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.062573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.062583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.065796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.065844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.065854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:31.825 [2024-04-26 21:35:21.069400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:31.825 [2024-04-26 21:35:21.069449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:31.825 [2024-04-26 21:35:21.069459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.087 [2024-04-26 21:35:21.073712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.087 [2024-04-26 21:35:21.073772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.087 [2024-04-26 21:35:21.073784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.087 [2024-04-26 21:35:21.078035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.087 [2024-04-26 21:35:21.078085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.087 [2024-04-26 21:35:21.078096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.087 [2024-04-26 21:35:21.081032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.087 [2024-04-26 21:35:21.081079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.087 [2024-04-26 21:35:21.081090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.087 [2024-04-26 21:35:21.084668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.087 [2024-04-26 21:35:21.084713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.087 [2024-04-26 21:35:21.084723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.087 [2024-04-26 21:35:21.089142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.087 [2024-04-26 21:35:21.089191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.087 [2024-04-26 21:35:21.089202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.093932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.093983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.093993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.097190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.097241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.097252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.101152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.101208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.101219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.105868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.105921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.105932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.110122] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.110175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.110186] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.112959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.113006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.113017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.116684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.116726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.116736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.121506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.121554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.121565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.125386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.125432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.125442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.128119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.128162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.128171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.132594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.132642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.132651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.136762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.136808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.136818] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.139330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.139378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.139387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.143720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.143764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.143773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.148273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.148316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.148325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.152628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.152671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.152680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.155878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.155916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.155924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.159479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.159516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.159524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.163537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.163578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:32.088 [2024-04-26 21:35:21.163587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.167624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.167666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.167674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.171806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.088 [2024-04-26 21:35:21.171848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.088 [2024-04-26 21:35:21.171857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.088 [2024-04-26 21:35:21.174973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.175016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.175037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.178726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.178768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.178778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.182317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.182366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.182377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.185723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.185775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.185801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.189190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.189234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.189244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.192354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.192390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.192398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.196089] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.196133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.196142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.199778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.199824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.199834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.202979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.203035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.203044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.206863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.206907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.206918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.210038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.210082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.210091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.213993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.214037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.214046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.218218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.218264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.218275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.222300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.222360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.222372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.225555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.225597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.225607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.229186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.229239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.229249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.233280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.233339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.233350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.236624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.236670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.236679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.240384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 
[2024-04-26 21:35:21.240426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.240435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.243978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.244018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.244026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.246966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.247007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.247017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.251169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.251211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.251221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.089 [2024-04-26 21:35:21.255180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.089 [2024-04-26 21:35:21.255225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.089 [2024-04-26 21:35:21.255233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.259058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.259099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.259108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.261276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.261313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.261321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.264610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.264649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.264657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.268511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.268553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.268561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.272385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.272429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.272438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.276819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.276872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.276883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.280069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.280117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.280128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.283602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.283648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.283659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.287712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.287763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.287774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.292260] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.292310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.292321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.296829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.296879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.296889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.300034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.300075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.300085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.303846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.303889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.303899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.308153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.308195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.308205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.312872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.312924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.312934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.317278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.317326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.317347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:32:32.090 [2024-04-26 21:35:21.319971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.320013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.320022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.324131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.324177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.324188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.328669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.328727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.328736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.332802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.332849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.332859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.090 [2024-04-26 21:35:21.335510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.090 [2024-04-26 21:35:21.335549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.090 [2024-04-26 21:35:21.335559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.353 [2024-04-26 21:35:21.339545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.353 [2024-04-26 21:35:21.339594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.353 [2024-04-26 21:35:21.339605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.353 [2024-04-26 21:35:21.343874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.353 [2024-04-26 21:35:21.343922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.353 [2024-04-26 21:35:21.343931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.353 [2024-04-26 21:35:21.348363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.353 [2024-04-26 21:35:21.348426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.353 [2024-04-26 21:35:21.348437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.353 [2024-04-26 21:35:21.352397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.353 [2024-04-26 21:35:21.352441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.353 [2024-04-26 21:35:21.352451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.353 [2024-04-26 21:35:21.355720] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.353 [2024-04-26 21:35:21.355760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.353 [2024-04-26 21:35:21.355770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.359794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.359838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.359848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.364576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.364628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.364639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.367641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.367688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.367699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.371346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.371394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.371410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.375017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.375066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.375077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.378504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.378555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.378566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.381649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.381700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.381710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.386175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.386227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.386237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.391199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.391255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.391266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.396171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.396229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.396239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.399145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.399196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:32.354 [2024-04-26 21:35:21.399206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.403749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.403805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.403817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.408181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.408241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.408252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.412771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.412824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.412836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.415859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.415908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.415919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.419646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.419696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.419707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.423757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.423806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.423816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.426470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.426514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9664 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.426524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.430097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.430144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.430154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.433514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.433559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.433567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.436844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.436907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.436917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.440031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.440075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.440083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.443850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.443893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.443902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.447609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.447653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.447662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.450856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.450904] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.450914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.454333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.454391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.454400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.354 [2024-04-26 21:35:21.458176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.354 [2024-04-26 21:35:21.458222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.354 [2024-04-26 21:35:21.458232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.461116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.461159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.461167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.464657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.464701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.464709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.467743] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.467785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.467794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.471953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.471996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.472004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.475961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.476007] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.476017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.480129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.480181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.480196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.482968] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.483013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.483023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.487873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.487928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.487938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.493079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.493139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.493151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.498088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.498147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.498158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.501236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.501283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.501294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.505298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 
00:32:32.355 [2024-04-26 21:35:21.505358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.505369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.509832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.509885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.509895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.513541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.513590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.513600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.517177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.517233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.517243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.520241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.520288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.520298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.523755] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.523802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.523812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.527520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.527574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.527585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.531136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.531188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.531199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.534338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.534402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.534414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.537691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.537747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.537768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.541242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.541292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.541302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.544740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.544787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.544797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.548070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.548117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.548127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.551767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.551812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.551821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.555283] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.555339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.355 [2024-04-26 21:35:21.555350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.355 [2024-04-26 21:35:21.558332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.355 [2024-04-26 21:35:21.558385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.356 [2024-04-26 21:35:21.558394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.356 [2024-04-26 21:35:21.561209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.356 [2024-04-26 21:35:21.561247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.356 [2024-04-26 21:35:21.561256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.356 [2024-04-26 21:35:21.565126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.356 [2024-04-26 21:35:21.565169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.356 [2024-04-26 21:35:21.565178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.356 [2024-04-26 21:35:21.569243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.356 [2024-04-26 21:35:21.569291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.356 [2024-04-26 21:35:21.569300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.356 [2024-04-26 21:35:21.572324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.356 [2024-04-26 21:35:21.572374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.356 [2024-04-26 21:35:21.572383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.356 [2024-04-26 21:35:21.576051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.356 [2024-04-26 21:35:21.576098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.356 [2024-04-26 21:35:21.576107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:32:32.356 [2024-04-26 21:35:21.580406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.356 [2024-04-26 21:35:21.580454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.356 [2024-04-26 21:35:21.580464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.356 [2024-04-26 21:35:21.584780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.356 [2024-04-26 21:35:21.584833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.356 [2024-04-26 21:35:21.584842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.356 [2024-04-26 21:35:21.589002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.356 [2024-04-26 21:35:21.589054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.356 [2024-04-26 21:35:21.589064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.356 [2024-04-26 21:35:21.591474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.356 [2024-04-26 21:35:21.591513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.356 [2024-04-26 21:35:21.591522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.356 [2024-04-26 21:35:21.595698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.356 [2024-04-26 21:35:21.595741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.356 [2024-04-26 21:35:21.595750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.356 [2024-04-26 21:35:21.600413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.356 [2024-04-26 21:35:21.600459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.356 [2024-04-26 21:35:21.600469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.603551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.603588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.603597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.607012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.607057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.607066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.611366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.611409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.611419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.615760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.615807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.615818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.618919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.618966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.618976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.622500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.622547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.622557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.626247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.626293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.626303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.629641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.629687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.629697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.633002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.633048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.633058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.636188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.636240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.636251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.640220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.640274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.640284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.643978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.644030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.644041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.647889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.647943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.647954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.650771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.650820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.628 [2024-04-26 21:35:21.650830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.654244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.654295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:32.628 [2024-04-26 21:35:21.654306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.628 [2024-04-26 21:35:21.658068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.628 [2024-04-26 21:35:21.658117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.658128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.661207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.661253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.661263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.665030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.665079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.665106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.669103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.669157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.669167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.673433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.673486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.673496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.677624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.677678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.677688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.680501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.680551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11616 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.680561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.685205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.685263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.685274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.689105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.689159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.689169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.692559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.692613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.692624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.696020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.696072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.696084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.699436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.699486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.699512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.703006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.703061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.703073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.706772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.706827] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.706838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.709936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.709991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.710002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.713317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.713375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.713401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.717168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.717211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.717237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.720125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.720168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.720176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.723514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.723559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.723570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.727346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.727388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.727397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.730732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.730779] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.730789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.734481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.734526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.734536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.738522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.738571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.738581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.741501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.741537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.741546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.744912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.744954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.744964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.748582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.629 [2024-04-26 21:35:21.748629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.629 [2024-04-26 21:35:21.748639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.629 [2024-04-26 21:35:21.751634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.751677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.751687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.755692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.755742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.755753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.758937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.758984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.758994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.762266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.762311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.762322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.766592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.766642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.766653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.771311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.771378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.771389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.774586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.774634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.774644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.778886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.778935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.778947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.783567] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.783621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.783633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.787931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.787981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.787992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.792010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.792059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.792069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.795016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.795061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.795072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.798622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.798670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.798681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.802680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.802732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.802743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.806931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.806982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.806994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:32:32.630 [2024-04-26 21:35:21.810059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.810109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.810121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.814126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.814175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.814186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.817981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.818027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.818038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.820927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.820975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.820986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.825051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.825103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.825115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.829468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.829525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.829538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.834320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.834383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.834395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.839137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.839191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.839219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.842592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.842644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.842656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.846683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.846734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.846745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.850775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.630 [2024-04-26 21:35:21.850828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.630 [2024-04-26 21:35:21.850838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.630 [2024-04-26 21:35:21.855624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.631 [2024-04-26 21:35:21.855676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.631 [2024-04-26 21:35:21.855686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.631 [2024-04-26 21:35:21.859401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.631 [2024-04-26 21:35:21.859466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.631 [2024-04-26 21:35:21.859477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.631 [2024-04-26 21:35:21.862246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.631 [2024-04-26 21:35:21.862296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.631 [2024-04-26 21:35:21.862308] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.631 [2024-04-26 21:35:21.866103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.631 [2024-04-26 21:35:21.866156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.631 [2024-04-26 21:35:21.866167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.631 [2024-04-26 21:35:21.869692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.631 [2024-04-26 21:35:21.869744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.631 [2024-04-26 21:35:21.869764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.631 [2024-04-26 21:35:21.873586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.631 [2024-04-26 21:35:21.873639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.631 [2024-04-26 21:35:21.873650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.631 [2024-04-26 21:35:21.877315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.631 [2024-04-26 21:35:21.877388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.631 [2024-04-26 21:35:21.877400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.891 [2024-04-26 21:35:21.881047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.891 [2024-04-26 21:35:21.881108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.891 [2024-04-26 21:35:21.881118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.891 [2024-04-26 21:35:21.885226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.891 [2024-04-26 21:35:21.885284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.891 [2024-04-26 21:35:21.885296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.891 [2024-04-26 21:35:21.889134] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.891 [2024-04-26 21:35:21.889188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.891 [2024-04-26 21:35:21.889199] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.891 [2024-04-26 21:35:21.893008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.891 [2024-04-26 21:35:21.893066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.891 [2024-04-26 21:35:21.893077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.891 [2024-04-26 21:35:21.896763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.891 [2024-04-26 21:35:21.896820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.891 [2024-04-26 21:35:21.896846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.891 [2024-04-26 21:35:21.900953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.891 [2024-04-26 21:35:21.901012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.891 [2024-04-26 21:35:21.901023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.891 [2024-04-26 21:35:21.904267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.891 [2024-04-26 21:35:21.904317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.891 [2024-04-26 21:35:21.904339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.891 [2024-04-26 21:35:21.907746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.891 [2024-04-26 21:35:21.907795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.907806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.911828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.911877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.911888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.915257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.915309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.915319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.918741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.918797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.918807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.922273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.922325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.922352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.926430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.926479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.926490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.929363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.929408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.929418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.933132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.933179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.933190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.936269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.936316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.936326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.940570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.940616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.940627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.945236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.945289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.945300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.949716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.949800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.949812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.953136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.953188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.953199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.957269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.957322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.957346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.962145] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.962208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.962220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.965127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.965175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.965202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.969414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.969466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.969494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.974547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.974600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.974612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.978969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.979023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.979034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.982024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.982067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.982078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.986067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.986114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.986125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.990704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.990758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.990770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.995371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:21.995424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.995434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:21.999276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 
00:32:32.892 [2024-04-26 21:35:21.999341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:21.999355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:22.002271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:22.002323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:22.002348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:22.006775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:22.006832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:22.006843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:22.011806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:22.011872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.892 [2024-04-26 21:35:22.011884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.892 [2024-04-26 21:35:22.016203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.892 [2024-04-26 21:35:22.016262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.016273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.018738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.018785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.018796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.023053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.023113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.023125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.027437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.027493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.027504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.031550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.031606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.031616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.034538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.034590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.034601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.039372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.039421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.039431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.043979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.044033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.044044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.048774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.048834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.048845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.051991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.052047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.052058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.055990] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.056045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.056056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.060643] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.060699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.060712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.065341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.065403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.065414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.070185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.070243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.070255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.072977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.073028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.073038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.076939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.076994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.077005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.081061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.081116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.081128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:32:32.893 [2024-04-26 21:35:22.085484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.085542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.085553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.088784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.088837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.088864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.092573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.092635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.092645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.097199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.097265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.097276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.100182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.100242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.100253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.104315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.104391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.104401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.109559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.109645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.109656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.114225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.114295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.114307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.117734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.117815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.117826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.121960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.122018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.893 [2024-04-26 21:35:22.122029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.893 [2024-04-26 21:35:22.125672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.893 [2024-04-26 21:35:22.125726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.894 [2024-04-26 21:35:22.125737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:32.894 [2024-04-26 21:35:22.129029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.894 [2024-04-26 21:35:22.129078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.894 [2024-04-26 21:35:22.129088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:32.894 [2024-04-26 21:35:22.132455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.894 [2024-04-26 21:35:22.132505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.894 [2024-04-26 21:35:22.132516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:32.894 [2024-04-26 21:35:22.136372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.894 [2024-04-26 21:35:22.136426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.894 [2024-04-26 21:35:22.136435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:32.894 [2024-04-26 21:35:22.140182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:32.894 [2024-04-26 21:35:22.140239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.894 [2024-04-26 21:35:22.140269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.155 [2024-04-26 21:35:22.143359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.143409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.143419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.147598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.147654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.147666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.151677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.151731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.151743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.154970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.155022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.155032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.159697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.159752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.159763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.164576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.164632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.164642] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.167891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.167941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.167952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.171935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.171989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.172015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.175888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.175940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.175966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.180500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.180561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.180572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.185240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.185310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.185326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.188918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.188982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.188994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.193419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.193478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:33.156 [2024-04-26 21:35:22.193489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.198099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.198160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.198170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.202555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.202618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.202629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.206754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.206810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.206821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.209829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.209879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.209890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.214184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.214239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.214250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.217728] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.217790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.217801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.221236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.221289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.221300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.224575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.224620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.224646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.228249] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.228296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.228307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.232844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.232893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.232918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.236797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.236850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.236860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.240108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.240157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.240182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.243745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.243793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.243819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.247844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.156 [2024-04-26 21:35:22.247895] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.156 [2024-04-26 21:35:22.247905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.156 [2024-04-26 21:35:22.251212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.251260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.251286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.254773] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.254822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.254833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.258569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.258614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.258624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.262018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.262063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.262090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.265733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.265814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.265825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.269465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.269514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.269526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.273176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.273226] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.273236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.276686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.276737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.276765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.280057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.280105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.280114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.283805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.283852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.283877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.287312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.287381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.287390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.291012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.291058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.291068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.294138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.294189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.294200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.297731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 
00:32:33.157 [2024-04-26 21:35:22.297796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.297806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.301021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.301076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.301087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.304617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.304677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.304704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.308722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.308807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.308819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.312736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.312818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.312829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.316352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.316411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.316439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.320109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.320167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.320178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.324318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.324382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.324410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.327386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.327438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.327450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.330764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.330812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.330822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.334646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.334696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.334707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.338059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.338110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.338137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.341852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.341902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.341913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.344977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.345027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.157 [2024-04-26 21:35:22.345038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.157 [2024-04-26 21:35:22.348779] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.157 [2024-04-26 21:35:22.348830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.348841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.352483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.352530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.352556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.355672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.355718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.355744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.359376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.359425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.359434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.362823] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.362871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.362882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.365687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.365730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.365764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.369491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.369540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.369551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:32:33.158 [2024-04-26 21:35:22.373104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.373155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.373165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.377042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.377092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.377118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.381175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.381220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.381231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.383612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.383649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.383657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.387347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.387393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.387402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.391852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.391903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.391929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.396219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.396271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.396299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.399496] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.399544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.399554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.158 [2024-04-26 21:35:22.403400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.158 [2024-04-26 21:35:22.403449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.158 [2024-04-26 21:35:22.403459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.418 [2024-04-26 21:35:22.407845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.418 [2024-04-26 21:35:22.407900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.418 [2024-04-26 21:35:22.407910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.418 [2024-04-26 21:35:22.412204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.418 [2024-04-26 21:35:22.412267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.418 [2024-04-26 21:35:22.412278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.418 [2024-04-26 21:35:22.415924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.418 [2024-04-26 21:35:22.415978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.418 [2024-04-26 21:35:22.416005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.418 [2024-04-26 21:35:22.419001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.418 [2024-04-26 21:35:22.419055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.418 [2024-04-26 21:35:22.419066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.418 [2024-04-26 21:35:22.423203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.423261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.423289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.428042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.428104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.428115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.431228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.431281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.431291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.434815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.434865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.434893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.439327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.439392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.439403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.444250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.444309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.444321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.447599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.447649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.447659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.451624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.451676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.451686] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.455717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.455780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.455792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.459053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.459114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.459126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.462626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.462685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.462697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.466508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.466570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.466581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.470155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.470216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.470226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.473194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.473248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:33.419 [2024-04-26 21:35:22.473258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:33.419 [2024-04-26 21:35:22.477051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60) 00:32:33.419 [2024-04-26 21:35:22.477101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:33.419 [2024-04-26 21:35:22.477111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:33.419 [2024-04-26 21:35:22.480403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7e60)
00:32:33.419 [2024-04-26 21:35:22.480452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:33.419 [2024-04-26 21:35:22.480462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:33.419
00:32:33.419 Latency(us)
00:32:33.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:33.419 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:32:33.419 nvme0n1 : 2.00 8043.50 1005.44 0.00 0.00 1985.82 525.86 8413.79
00:32:33.419 ===================================================================================================================
00:32:33.419 Total : 8043.50 1005.44 0.00 0.00 1985.82 525.86 8413.79
00:32:33.419 0
00:32:33.419 21:35:22 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:33.419 21:35:22 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:33.419 21:35:22 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:33.419 | .driver_specific
00:32:33.419 | .nvme_error
00:32:33.419 | .status_code
00:32:33.419 | .command_transient_transport_error'
00:32:33.419 21:35:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:33.678 21:35:22 -- host/digest.sh@71 -- # (( 519 > 0 ))
00:32:33.678 21:35:22 -- host/digest.sh@73 -- # killprocess 104904
00:32:33.678 21:35:22 -- common/autotest_common.sh@936 -- # '[' -z 104904 ']'
00:32:33.678 21:35:22 -- common/autotest_common.sh@940 -- # kill -0 104904
00:32:33.678 21:35:22 -- common/autotest_common.sh@941 -- # uname
00:32:33.678 21:35:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:32:33.678 21:35:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104904
00:32:33.678 21:35:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:32:33.678 21:35:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:32:33.678 killing process with pid 104904
00:32:33.678 21:35:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104904'
00:32:33.678 21:35:22 -- common/autotest_common.sh@955 -- # kill 104904
00:32:33.678 Received shutdown signal, test time was about 2.000000 seconds
00:32:33.678
00:32:33.678 Latency(us)
00:32:33.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:33.678 ===================================================================================================================
00:32:33.678 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:33.678 21:35:22 -- common/autotest_common.sh@960 -- # wait 104904
00:32:33.678 21:35:22 -- common/autotest_common.sh@960 -- # wait 104904
00:32:33.936 21:35:22 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:32:33.936 21:35:22 -- host/digest.sh@54 -- # local rw bs qd
00:32:33.936 21:35:22 -- host/digest.sh@56 -- # rw=randwrite
00:32:33.936 21:35:22 -- host/digest.sh@56 -- # bs=4096
00:32:33.936 21:35:22 -- host/digest.sh@56 -- # qd=128
00:32:33.936 21:35:22 -- host/digest.sh@58 -- # bperfpid=104989
00:32:33.936 21:35:22 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:32:33.936 21:35:22 -- host/digest.sh@60 -- # waitforlisten 104989 /var/tmp/bperf.sock
00:32:33.936 21:35:22 -- common/autotest_common.sh@817 -- # '[' -z 104989 ']'
00:32:33.936 21:35:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:33.936 21:35:22 -- common/autotest_common.sh@822 -- # local max_retries=100
00:32:33.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:33.936 21:35:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:33.936 21:35:22 -- common/autotest_common.sh@826 -- # xtrace_disable
00:32:33.936 21:35:22 -- common/autotest_common.sh@10 -- # set +x
00:32:33.936 [2024-04-26 21:35:22.999020] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:32:33.936 [2024-04-26 21:35:22.999090] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104989 ]
00:32:33.936 [2024-04-26 21:35:23.139636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:34.194 [2024-04-26 21:35:23.194085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:34.763 21:35:23 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:32:34.763 21:35:23 -- common/autotest_common.sh@850 -- # return 0
00:32:34.763 21:35:23 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:34.763 21:35:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:35.022 21:35:24 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:35.022 21:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:35.022 21:35:24 -- common/autotest_common.sh@10 -- # set +x
00:32:35.022 21:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:35.022 21:35:24 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:35.022 21:35:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:35.281 nvme0n1
00:32:35.281 21:35:24 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:32:35.281 21:35:24 -- common/autotest_common.sh@549 -- # xtrace_disable
00:32:35.281 21:35:24 -- common/autotest_common.sh@10 -- # set +x
00:32:35.281 21:35:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:32:35.281 21:35:24 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:35.281 21:35:24 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:35.541 Running I/O for 2 seconds...
00:32:35.541 [2024-04-26 21:35:24.595807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ee5c8 00:32:35.541 [2024-04-26 21:35:24.596666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.596704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.609367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e9e10 00:32:35.541 [2024-04-26 21:35:24.610925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.610972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.619290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190df118 00:32:35.541 [2024-04-26 21:35:24.620476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.620517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.631934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190eb328 00:32:35.541 [2024-04-26 21:35:24.633824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.633861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.639612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e3d08 00:32:35.541 [2024-04-26 21:35:24.640530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.640565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.650661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fef90 00:32:35.541 [2024-04-26 21:35:24.651814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.651856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.661613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fd208 00:32:35.541 [2024-04-26 21:35:24.662871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.662908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.673073] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f2510 00:32:35.541 [2024-04-26 21:35:24.674506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.674543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.682104] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fc998 00:32:35.541 [2024-04-26 21:35:24.683562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.683595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.693341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e8d30 00:32:35.541 [2024-04-26 21:35:24.694495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.694530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.703960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fef90 00:32:35.541 [2024-04-26 21:35:24.705062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.705100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.713852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fc998 00:32:35.541 [2024-04-26 21:35:24.714939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.714973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.723613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fef90 00:32:35.541 [2024-04-26 21:35:24.724379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.724412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.734598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e23b8 00:32:35.541 [2024-04-26 21:35:24.735662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.735708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.747235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190feb58 00:32:35.541 [2024-04-26 21:35:24.748501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.748535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.757244] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190eea00 00:32:35.541 [2024-04-26 21:35:24.758505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.758536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.768542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f6458 00:32:35.541 [2024-04-26 21:35:24.769925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.769959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.779909] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ef6a8 00:32:35.541 [2024-04-26 21:35:24.781438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.781471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:35.541 [2024-04-26 21:35:24.787603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ebfd0 00:32:35.541 [2024-04-26 21:35:24.788290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.541 [2024-04-26 21:35:24.788325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:32:35.801 [2024-04-26 21:35:24.800347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190dece0 00:32:35.801 [2024-04-26 21:35:24.801644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.801 [2024-04-26 21:35:24.801679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:35.801 [2024-04-26 21:35:24.810629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fc998 00:32:35.801 [2024-04-26 21:35:24.811537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.801 [2024-04-26 21:35:24.811571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:35.801 [2024-04-26 21:35:24.823189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f7100 00:32:35.801 [2024-04-26 21:35:24.824722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.801 [2024-04-26 21:35:24.824766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:35.801 [2024-04-26 21:35:24.833903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fbcf0 00:32:35.801 [2024-04-26 21:35:24.835285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.801 [2024-04-26 21:35:24.835326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:35.801 [2024-04-26 21:35:24.844747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190de8a8 00:32:35.801 [2024-04-26 21:35:24.846123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.801 [2024-04-26 21:35:24.846158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:35.801 [2024-04-26 21:35:24.855200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fc560 00:32:35.801 [2024-04-26 21:35:24.856164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.801 [2024-04-26 21:35:24.856203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:35.801 [2024-04-26 21:35:24.866442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f5be8 00:32:35.801 [2024-04-26 21:35:24.867528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.801 [2024-04-26 21:35:24.867566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:35.801 [2024-04-26 21:35:24.878614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ecc78 00:32:35.801 [2024-04-26 21:35:24.879872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.801 [2024-04-26 21:35:24.879911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:35.801 [2024-04-26 21:35:24.889895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e7c50 00:32:35.801 [2024-04-26 21:35:24.891140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.801 [2024-04-26 21:35:24.891179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:35.801 [2024-04-26 21:35:24.902468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e9e10 00:32:35.801 [2024-04-26 21:35:24.904243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.801 [2024-04-26 21:35:24.904281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:35.801 [2024-04-26 21:35:24.913871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f20d8 00:32:35.801 [2024-04-26 21:35:24.915616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.801 [2024-04-26 21:35:24.915652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:35.801 [2024-04-26 21:35:24.924886] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e9168 00:32:35.802 [2024-04-26 21:35:24.926667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.802 [2024-04-26 21:35:24.926722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:35.802 [2024-04-26 21:35:24.936556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190df118 00:32:35.802 [2024-04-26 21:35:24.938281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.802 [2024-04-26 21:35:24.938319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:35.802 [2024-04-26 21:35:24.947204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f1ca0 00:32:35.802 [2024-04-26 21:35:24.948684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.802 [2024-04-26 21:35:24.948719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:35.802 [2024-04-26 21:35:24.957701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ebb98 00:32:35.802 [2024-04-26 21:35:24.959123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.802 [2024-04-26 21:35:24.959161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:35.802 [2024-04-26 21:35:24.968776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f0350 00:32:35.802 [2024-04-26 21:35:24.970089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.802 [2024-04-26 
21:35:24.970129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:35.802 [2024-04-26 21:35:24.979673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fa3a0 00:32:35.802 [2024-04-26 21:35:24.980450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.802 [2024-04-26 21:35:24.980489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:35.802 [2024-04-26 21:35:24.990419] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e95a0 00:32:35.802 [2024-04-26 21:35:24.991401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.802 [2024-04-26 21:35:24.991441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:35.802 [2024-04-26 21:35:25.001149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f9b30 00:32:35.802 [2024-04-26 21:35:25.002079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.802 [2024-04-26 21:35:25.002116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:35.802 [2024-04-26 21:35:25.014040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fac10 00:32:35.802 [2024-04-26 21:35:25.015777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.802 [2024-04-26 21:35:25.015815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:35.802 [2024-04-26 21:35:25.022267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e73e0 00:32:35.802 [2024-04-26 21:35:25.023058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.802 [2024-04-26 21:35:25.023098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:35.802 [2024-04-26 21:35:25.035365] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ee5c8 00:32:35.802 [2024-04-26 21:35:25.036653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:35.802 [2024-04-26 21:35:25.036692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:35.802 [2024-04-26 21:35:25.046263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fac10 00:32:35.802 [2024-04-26 21:35:25.047197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:35.802 [2024-04-26 21:35:25.047236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:36.063 [2024-04-26 21:35:25.057894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e4140 00:32:36.063 [2024-04-26 21:35:25.059160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.063 [2024-04-26 21:35:25.059209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:36.063 [2024-04-26 21:35:25.069957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e12d8 00:32:36.063 [2024-04-26 21:35:25.071730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.063 [2024-04-26 21:35:25.071761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:36.063 [2024-04-26 21:35:25.081065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ecc78 00:32:36.063 [2024-04-26 21:35:25.082866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.063 [2024-04-26 21:35:25.082905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:36.063 [2024-04-26 21:35:25.089295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f20d8 00:32:36.063 [2024-04-26 21:35:25.090127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.063 [2024-04-26 21:35:25.090162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:36.063 [2024-04-26 21:35:25.102365] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f9f68 00:32:36.063 [2024-04-26 21:35:25.103656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.063 [2024-04-26 21:35:25.103694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:36.063 [2024-04-26 21:35:25.114555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e3d08 00:32:36.063 [2024-04-26 21:35:25.116318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.063 [2024-04-26 21:35:25.116361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:36.063 [2024-04-26 21:35:25.125604] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e99d8 00:32:36.063 [2024-04-26 21:35:25.127249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10972 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:32:36.063 [2024-04-26 21:35:25.127289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:36.063 [2024-04-26 21:35:25.135999] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fef90 00:32:36.063 [2024-04-26 21:35:25.137261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.063 [2024-04-26 21:35:25.137313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:36.063 [2024-04-26 21:35:25.147771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190df550 00:32:36.063 [2024-04-26 21:35:25.149087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.063 [2024-04-26 21:35:25.149132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:36.063 [2024-04-26 21:35:25.161745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e6fa8 00:32:36.063 [2024-04-26 21:35:25.163754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.063 [2024-04-26 21:35:25.163800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.063 [2024-04-26 21:35:25.169888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fef90 00:32:36.064 [2024-04-26 21:35:25.170908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.170960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.064 [2024-04-26 21:35:25.183166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f5378 00:32:36.064 [2024-04-26 21:35:25.184614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.184661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.064 [2024-04-26 21:35:25.193044] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fda78 00:32:36.064 [2024-04-26 21:35:25.194396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.194441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:36.064 [2024-04-26 21:35:25.203518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f20d8 00:32:36.064 [2024-04-26 21:35:25.204643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:20594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.204684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:36.064 [2024-04-26 21:35:25.214109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f3e60 00:32:36.064 [2024-04-26 21:35:25.215146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.215187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:36.064 [2024-04-26 21:35:25.225230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f0788 00:32:36.064 [2024-04-26 21:35:25.226134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.226174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:36.064 [2024-04-26 21:35:25.237301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190dfdc0 00:32:36.064 [2024-04-26 21:35:25.238662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.238700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:36.064 [2024-04-26 21:35:25.247085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f31b8 00:32:36.064 [2024-04-26 21:35:25.248580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.248616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:36.064 [2024-04-26 21:35:25.258139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ddc00 00:32:36.064 [2024-04-26 21:35:25.259104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.259143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:36.064 [2024-04-26 21:35:25.268168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ef270 00:32:36.064 [2024-04-26 21:35:25.269199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.269235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:36.064 [2024-04-26 21:35:25.278804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fac10 00:32:36.064 [2024-04-26 21:35:25.279314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:105 nsid:1 lba:5645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.279358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:36.064 [2024-04-26 21:35:25.290748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ee190 00:32:36.064 [2024-04-26 21:35:25.292025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.292061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:36.064 [2024-04-26 21:35:25.301275] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fef90 00:32:36.064 [2024-04-26 21:35:25.302802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.302838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:36.064 [2024-04-26 21:35:25.311758] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e88f8 00:32:36.064 [2024-04-26 21:35:25.313020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.064 [2024-04-26 21:35:25.313064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:36.323 [2024-04-26 21:35:25.322758] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f0bc0 00:32:36.323 [2024-04-26 21:35:25.323794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.323 [2024-04-26 21:35:25.323833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.333085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190eee38 00:32:36.324 [2024-04-26 21:35:25.334120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.334157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.344681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f4f40 00:32:36.324 [2024-04-26 21:35:25.345913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.345958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.358432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f0bc0 00:32:36.324 [2024-04-26 21:35:25.360353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.360395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.366528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e8d30 00:32:36.324 [2024-04-26 21:35:25.367457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.367493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.380364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fac10 00:32:36.324 [2024-04-26 21:35:25.381824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.381867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.389230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ec840 00:32:36.324 [2024-04-26 21:35:25.390007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.390046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.403507] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e5220 00:32:36.324 [2024-04-26 21:35:25.405252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.405291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.414393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190feb58 00:32:36.324 [2024-04-26 21:35:25.416153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.416201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.422408] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f3a28 00:32:36.324 [2024-04-26 21:35:25.423183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.423220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.436076] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e01f8 00:32:36.324 [2024-04-26 
21:35:25.437562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.437602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.446885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ee5c8 00:32:36.324 [2024-04-26 21:35:25.448127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.448173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.458137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190de8a8 00:32:36.324 [2024-04-26 21:35:25.459309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.459358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.469732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f6cc8 00:32:36.324 [2024-04-26 21:35:25.470432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.470472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.480951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f2d80 00:32:36.324 [2024-04-26 21:35:25.481962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.482004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.492110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e7c50 00:32:36.324 [2024-04-26 21:35:25.492993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.493035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.502715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e23b8 00:32:36.324 [2024-04-26 21:35:25.503424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.503476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.516117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f4b08 
00:32:36.324 [2024-04-26 21:35:25.517009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.517052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.527038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e0ea0 00:32:36.324 [2024-04-26 21:35:25.527797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.527838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.537358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e8d30 00:32:36.324 [2024-04-26 21:35:25.538343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.538401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.550951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f4f40 00:32:36.324 [2024-04-26 21:35:25.552530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.552572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.560988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fb8b8 00:32:36.324 [2024-04-26 21:35:25.562676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.562725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:36.324 [2024-04-26 21:35:25.573216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190eb328 00:32:36.324 [2024-04-26 21:35:25.574206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.324 [2024-04-26 21:35:25.574256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:36.584 [2024-04-26 21:35:25.583967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e1710 00:32:36.584 [2024-04-26 21:35:25.584719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.584 [2024-04-26 21:35:25.584759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:36.584 [2024-04-26 21:35:25.594548] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) 
with pdu=0x2000190e7818 00:32:36.584 [2024-04-26 21:35:25.595201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.584 [2024-04-26 21:35:25.595242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:36.584 [2024-04-26 21:35:25.607919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e38d0 00:32:36.584 [2024-04-26 21:35:25.609484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.584 [2024-04-26 21:35:25.609549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:36.584 [2024-04-26 21:35:25.617618] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f4f40 00:32:36.585 [2024-04-26 21:35:25.618573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.618613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.628753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fdeb0 00:32:36.585 [2024-04-26 21:35:25.629680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.629718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.638938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ebfd0 00:32:36.585 [2024-04-26 21:35:25.639859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.639896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.650817] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ee5c8 00:32:36.585 [2024-04-26 21:35:25.651900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.651936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.664058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e8088 00:32:36.585 [2024-04-26 21:35:25.665820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.665854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.675260] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1a61e80) with pdu=0x2000190f0788 00:32:36.585 [2024-04-26 21:35:25.677037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.677072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.686286] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f9b30 00:32:36.585 [2024-04-26 21:35:25.687981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.688019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.696208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e99d8 00:32:36.585 [2024-04-26 21:35:25.697952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.697989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.708416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ddc00 00:32:36.585 [2024-04-26 21:35:25.709416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.709454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.719396] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f5378 00:32:36.585 [2024-04-26 21:35:25.720227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.720269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.730111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fdeb0 00:32:36.585 [2024-04-26 21:35:25.730792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.730830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.740417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e3060 00:32:36.585 [2024-04-26 21:35:25.741178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.741214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.753647] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f7970 00:32:36.585 [2024-04-26 21:35:25.755140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.755180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.764574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ef270 00:32:36.585 [2024-04-26 21:35:25.765507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.765544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.775070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f20d8 00:32:36.585 [2024-04-26 21:35:25.775935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.775974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.788354] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190de470 00:32:36.585 [2024-04-26 21:35:25.790336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.790378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.796651] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e7818 00:32:36.585 [2024-04-26 21:35:25.797687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.797724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.809824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190efae0 00:32:36.585 [2024-04-26 21:35:25.811347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.811382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.820900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e8088 00:32:36.585 [2024-04-26 21:35:25.822529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.822568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:36.585 [2024-04-26 21:35:25.832994] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e0a68 00:32:36.585 [2024-04-26 21:35:25.834582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.585 [2024-04-26 21:35:25.834637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:25.842676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190eb328 00:32:36.844 [2024-04-26 21:35:25.843525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.843560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:25.854116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190eb328 00:32:36.844 [2024-04-26 21:35:25.855074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.855109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:25.865539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e12d8 00:32:36.844 [2024-04-26 21:35:25.866206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.866245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:25.878888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fa7d8 00:32:36.844 [2024-04-26 21:35:25.880709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.880741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:25.886425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190eea00 00:32:36.844 [2024-04-26 21:35:25.887349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.887376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:25.898865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190de8a8 00:32:36.844 [2024-04-26 21:35:25.900248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.900293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:36.844 
[2024-04-26 21:35:25.911071] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e73e0 00:32:36.844 [2024-04-26 21:35:25.912799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.912835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:25.918635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fc998 00:32:36.844 [2024-04-26 21:35:25.919284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.919317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:25.929410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f0ff8 00:32:36.844 [2024-04-26 21:35:25.930617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.930655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:25.939458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f81e0 00:32:36.844 [2024-04-26 21:35:25.940769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.940802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:25.950597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f2510 00:32:36.844 [2024-04-26 21:35:25.951771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.951808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:25.963398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e0630 00:32:36.844 [2024-04-26 21:35:25.964931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.964967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:25.971016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f81e0 00:32:36.844 [2024-04-26 21:35:25.971642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.971675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 
dnr:0 00:32:36.844 [2024-04-26 21:35:25.982676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f92c0 00:32:36.844 [2024-04-26 21:35:25.983775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.983812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:25.992236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f0ff8 00:32:36.844 [2024-04-26 21:35:25.993123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:25.993162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:26.002482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f4b08 00:32:36.844 [2024-04-26 21:35:26.003290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:26.003325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:26.013630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f7da8 00:32:36.844 [2024-04-26 21:35:26.015037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:26.015072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:26.024141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e0630 00:32:36.844 [2024-04-26 21:35:26.025566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:26.025600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:26.034281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f5378 00:32:36.844 [2024-04-26 21:35:26.035656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:26.035703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:26.044778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e84c0 00:32:36.844 [2024-04-26 21:35:26.045932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:26.045969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:26.055756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e9e10 00:32:36.844 [2024-04-26 21:35:26.056594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.844 [2024-04-26 21:35:26.056628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:36.844 [2024-04-26 21:35:26.065734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f1430 00:32:36.844 [2024-04-26 21:35:26.066426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.845 [2024-04-26 21:35:26.066460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:36.845 [2024-04-26 21:35:26.075591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f3a28 00:32:36.845 [2024-04-26 21:35:26.076511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.845 [2024-04-26 21:35:26.076542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:36.845 [2024-04-26 21:35:26.085052] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190de038 00:32:36.845 [2024-04-26 21:35:26.085770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:36.845 [2024-04-26 21:35:26.085816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.095690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e8088 00:32:37.104 [2024-04-26 21:35:26.096179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.096217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.109174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ed0b0 00:32:37.104 [2024-04-26 21:35:26.111077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.111118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.116884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e88f8 00:32:37.104 [2024-04-26 21:35:26.117569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.117607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.127070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fbcf0 00:32:37.104 [2024-04-26 21:35:26.127802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.127837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.139534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190eff18 00:32:37.104 [2024-04-26 21:35:26.140829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.140874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.150165] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190de038 00:32:37.104 [2024-04-26 21:35:26.151158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.151198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.162943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f2510 00:32:37.104 [2024-04-26 21:35:26.163881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.163919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.174491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e5a90 00:32:37.104 [2024-04-26 21:35:26.175546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.175586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.185720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fc128 00:32:37.104 [2024-04-26 21:35:26.186889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.186929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.196788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e9e10 00:32:37.104 [2024-04-26 21:35:26.197709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.197751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.210007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fc560 00:32:37.104 [2024-04-26 21:35:26.211734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.211773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.221608] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f4298 00:32:37.104 [2024-04-26 21:35:26.223497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.223534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.229478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e6fa8 00:32:37.104 [2024-04-26 21:35:26.230212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.230250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.242286] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e49b0 00:32:37.104 [2024-04-26 21:35:26.243775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.243818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.252274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f9b30 00:32:37.104 [2024-04-26 21:35:26.253459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.253508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.263812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f35f0 00:32:37.104 [2024-04-26 21:35:26.265026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.265063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.274732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190eea00 00:32:37.104 [2024-04-26 21:35:26.275526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.275564] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.286028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f7100 00:32:37.104 [2024-04-26 21:35:26.287117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.287163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.298877] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190de470 00:32:37.104 [2024-04-26 21:35:26.300405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.300447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.309518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f7970 00:32:37.104 [2024-04-26 21:35:26.311137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.104 [2024-04-26 21:35:26.311182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:37.104 [2024-04-26 21:35:26.320301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190eff18 00:32:37.104 [2024-04-26 21:35:26.321660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.105 [2024-04-26 21:35:26.321702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:37.105 [2024-04-26 21:35:26.330653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190eaef0 00:32:37.105 [2024-04-26 21:35:26.331992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.105 [2024-04-26 21:35:26.332032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:37.105 [2024-04-26 21:35:26.344046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f0350 00:32:37.105 [2024-04-26 21:35:26.345991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.105 [2024-04-26 21:35:26.346033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:37.105 [2024-04-26 21:35:26.351965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f35f0 00:32:37.105 [2024-04-26 21:35:26.352934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.105 [2024-04-26 21:35:26.352978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.364985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e7c50 00:32:37.365 [2024-04-26 21:35:26.366596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.366641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.375736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e0ea0 00:32:37.365 [2024-04-26 21:35:26.376975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.377019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.386285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e88f8 00:32:37.365 [2024-04-26 21:35:26.387656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.387691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.397076] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190de038 00:32:37.365 [2024-04-26 21:35:26.398418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.398452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.407711] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e5220 00:32:37.365 [2024-04-26 21:35:26.408908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.408946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.417816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f8a50 00:32:37.365 [2024-04-26 21:35:26.418922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.418958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.427823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fb8b8 00:32:37.365 [2024-04-26 21:35:26.428994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 
21:35:26.429030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.439875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e99d8 00:32:37.365 [2024-04-26 21:35:26.441185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.441225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.450461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e5658 00:32:37.365 [2024-04-26 21:35:26.451638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.451688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.461661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f2948 00:32:37.365 [2024-04-26 21:35:26.462537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.462575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.472864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f7970 00:32:37.365 [2024-04-26 21:35:26.474030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.474064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.484931] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f7970 00:32:37.365 [2024-04-26 21:35:26.486553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.486586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.495391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190fbcf0 00:32:37.365 [2024-04-26 21:35:26.496980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.497013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.507300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f6890 00:32:37.365 [2024-04-26 21:35:26.509523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:37.365 [2024-04-26 21:35:26.509559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.517039] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f0ff8 00:32:37.365 [2024-04-26 21:35:26.518268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.518307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.528026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ebb98 00:32:37.365 [2024-04-26 21:35:26.528663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.528700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.538689] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190eea00 00:32:37.365 [2024-04-26 21:35:26.539656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.365 [2024-04-26 21:35:26.539692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:37.365 [2024-04-26 21:35:26.549253] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190e3d08 00:32:37.366 [2024-04-26 21:35:26.550113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.366 [2024-04-26 21:35:26.550150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:37.366 [2024-04-26 21:35:26.559643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190dece0 00:32:37.366 [2024-04-26 21:35:26.560317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.366 [2024-04-26 21:35:26.560360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:37.366 [2024-04-26 21:35:26.572500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190f5be8 00:32:37.366 [2024-04-26 21:35:26.573836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.366 [2024-04-26 21:35:26.573875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:37.366 [2024-04-26 21:35:26.582573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a61e80) with pdu=0x2000190ed4e8 00:32:37.366 [2024-04-26 21:35:26.583654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18947 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:32:37.366 [2024-04-26 21:35:26.583686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:37.366 00:32:37.366 Latency(us) 00:32:37.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.366 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:37.366 nvme0n1 : 2.00 23048.15 90.03 0.00 0.00 5545.97 2074.83 15224.96 00:32:37.366 =================================================================================================================== 00:32:37.366 Total : 23048.15 90.03 0.00 0.00 5545.97 2074.83 15224.96 00:32:37.366 0 00:32:37.366 21:35:26 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:37.366 21:35:26 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:37.366 21:35:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:37.366 21:35:26 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:37.366 | .driver_specific 00:32:37.366 | .nvme_error 00:32:37.366 | .status_code 00:32:37.366 | .command_transient_transport_error' 00:32:37.934 21:35:26 -- host/digest.sh@71 -- # (( 181 > 0 )) 00:32:37.934 21:35:26 -- host/digest.sh@73 -- # killprocess 104989 00:32:37.934 21:35:26 -- common/autotest_common.sh@936 -- # '[' -z 104989 ']' 00:32:37.934 21:35:26 -- common/autotest_common.sh@940 -- # kill -0 104989 00:32:37.934 21:35:26 -- common/autotest_common.sh@941 -- # uname 00:32:37.934 21:35:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:37.934 21:35:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104989 00:32:37.934 21:35:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:37.934 21:35:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:37.934 killing process with pid 104989 00:32:37.934 21:35:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104989' 00:32:37.934 Received shutdown signal, test time was about 2.000000 seconds 00:32:37.934 00:32:37.934 Latency(us) 00:32:37.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.934 =================================================================================================================== 00:32:37.934 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.934 21:35:26 -- common/autotest_common.sh@955 -- # kill 104989 00:32:37.934 21:35:26 -- common/autotest_common.sh@960 -- # wait 104989 00:32:37.934 21:35:27 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:32:37.934 21:35:27 -- host/digest.sh@54 -- # local rw bs qd 00:32:37.934 21:35:27 -- host/digest.sh@56 -- # rw=randwrite 00:32:37.934 21:35:27 -- host/digest.sh@56 -- # bs=131072 00:32:37.934 21:35:27 -- host/digest.sh@56 -- # qd=16 00:32:37.934 21:35:27 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:37.934 21:35:27 -- host/digest.sh@58 -- # bperfpid=105080 00:32:37.934 21:35:27 -- host/digest.sh@60 -- # waitforlisten 105080 /var/tmp/bperf.sock 00:32:37.934 21:35:27 -- common/autotest_common.sh@817 -- # '[' -z 105080 ']' 00:32:37.934 21:35:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:37.934 21:35:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:37.934 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bperf.sock... 00:32:37.934 21:35:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:37.934 21:35:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:37.934 21:35:27 -- common/autotest_common.sh@10 -- # set +x 00:32:37.934 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:37.934 Zero copy mechanism will not be used. 00:32:37.934 [2024-04-26 21:35:27.120405] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:32:37.934 [2024-04-26 21:35:27.120477] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105080 ] 00:32:38.194 [2024-04-26 21:35:27.260763] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.194 [2024-04-26 21:35:27.312701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.133 21:35:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:39.133 21:35:28 -- common/autotest_common.sh@850 -- # return 0 00:32:39.133 21:35:28 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:39.133 21:35:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:39.133 21:35:28 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:39.133 21:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:39.133 21:35:28 -- common/autotest_common.sh@10 -- # set +x 00:32:39.133 21:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:39.133 21:35:28 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:39.133 21:35:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:39.393 nvme0n1 00:32:39.393 21:35:28 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:39.393 21:35:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:39.393 21:35:28 -- common/autotest_common.sh@10 -- # set +x 00:32:39.653 21:35:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:39.653 21:35:28 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:39.653 21:35:28 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:39.653 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:39.653 Zero copy mechanism will not be used. 00:32:39.653 Running I/O for 2 seconds... 
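For reference, the xtrace above reduces to a short RPC sequence: start bdevperf against /var/tmp/bperf.sock, enable per-bdev NVMe error statistics with unlimited retries, attach the controller over TCP with data digest checking (--ddgst), inject crc32c corruption through accel_error_inject_error, run the workload, and read back the command_transient_transport_error counter from bdev_get_iostat. The sketch below is a minimal standalone version of that sequence, not the harness itself (the real logic lives in host/digest.sh); paths are taken from the trace, and the nvmf target's RPC socket /var/tmp/spdk.sock is an assumption, since rpc_cmd does not show which socket it talks to.

#!/usr/bin/env bash
# Minimal sketch of the data-digest error-injection flow traced above.
set -e

rootdir=/home/vagrant/spdk_repo/spdk
bperf_sock=/var/tmp/bperf.sock
target_sock=/var/tmp/spdk.sock        # assumed target RPC socket (not shown in the trace)

bperf_rpc()  { "$rootdir/scripts/rpc.py" -s "$bperf_sock" "$@"; }
target_rpc() { "$rootdir/scripts/rpc.py" -s "$target_sock" "$@"; }

# Start bdevperf in wait-for-RPC mode (-z): 128 KiB random writes, queue depth 16,
# 2 seconds. The harness uses waitforlisten; a crude poll on the socket is enough here.
"$rootdir/build/examples/bdevperf" -m 2 -r "$bperf_sock" \
        -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
while [ ! -S "$bperf_sock" ]; do sleep 0.2; done

# Keep per-bdev NVMe error statistics and retry failed commands indefinitely,
# so digest failures are counted instead of failing the run outright.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with data digest enabled (--ddgst) while crc32c injection is disabled,
# then turn corruption on with the same arguments the trace uses (-t corrupt -i 32).
target_rpc accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the workload, then read back how many commands completed with a
# transient transport error (the completions repeated throughout this log).
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests
errs=$(bperf_rpc bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
kill "$bperfpid"

echo "commands completed with TRANSIENT TRANSPORT ERROR: $errs"
(( errs > 0 ))   # host/digest.sh passes only when at least one digest error was counted

Each injected corruption shows up below as a data_crc32_calc_done digest error followed by a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and the counter read at the end is what the script checks against zero (the previous 4 KiB run above counted 181).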
00:32:39.653 [2024-04-26 21:35:28.762119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.762614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.762657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.766869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.767356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.767394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.771427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.771921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.771955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.775679] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.776143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.776177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.779873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.780302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.780342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.783954] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.784453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.784482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.788278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.788749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.788779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.792364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.792815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.792848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.796455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.796910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.796940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.800607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.801082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.801112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.804652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.805105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.805137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.808702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.809149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.809180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.812754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.813202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.813233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.816767] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.817215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.817245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.820991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.821463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.821493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.825008] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.825488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.825520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.829104] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.829559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.829591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.833232] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.833681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.833712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.837382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.837845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.837878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.841422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.841915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.841946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.845580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.846093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.846125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.849869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.850370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.850402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.853993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.653 [2024-04-26 21:35:28.854469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.653 [2024-04-26 21:35:28.854499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.653 [2024-04-26 21:35:28.858009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.654 [2024-04-26 21:35:28.858471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.654 [2024-04-26 21:35:28.858499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.654 [2024-04-26 21:35:28.861997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.654 [2024-04-26 21:35:28.862458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.654 [2024-04-26 21:35:28.862487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.654 [2024-04-26 21:35:28.865933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.654 [2024-04-26 21:35:28.866403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.654 [2024-04-26 21:35:28.866434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.654 [2024-04-26 21:35:28.869942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.654 [2024-04-26 21:35:28.870416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.654 [2024-04-26 21:35:28.870446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.654 [2024-04-26 21:35:28.873912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.654 [2024-04-26 21:35:28.874366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.654 
[2024-04-26 21:35:28.874394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.654 [2024-04-26 21:35:28.877861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.654 [2024-04-26 21:35:28.878299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.654 [2024-04-26 21:35:28.878338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.654 [2024-04-26 21:35:28.881922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.654 [2024-04-26 21:35:28.882374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.654 [2024-04-26 21:35:28.882404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.654 [2024-04-26 21:35:28.885810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.654 [2024-04-26 21:35:28.886246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.654 [2024-04-26 21:35:28.886277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.654 [2024-04-26 21:35:28.889800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.654 [2024-04-26 21:35:28.890261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.654 [2024-04-26 21:35:28.890311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.654 [2024-04-26 21:35:28.893750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.654 [2024-04-26 21:35:28.894215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.654 [2024-04-26 21:35:28.894257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.654 [2024-04-26 21:35:28.897792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.654 [2024-04-26 21:35:28.898235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.654 [2024-04-26 21:35:28.898269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.654 [2024-04-26 21:35:28.901902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.654 [2024-04-26 21:35:28.902382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.654 [2024-04-26 21:35:28.902431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.915 [2024-04-26 21:35:28.906155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.915 [2024-04-26 21:35:28.906661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.915 [2024-04-26 21:35:28.906699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.915 [2024-04-26 21:35:28.910424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.915 [2024-04-26 21:35:28.910898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.915 [2024-04-26 21:35:28.910932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.915 [2024-04-26 21:35:28.914527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.915 [2024-04-26 21:35:28.915005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.915 [2024-04-26 21:35:28.915037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.915 [2024-04-26 21:35:28.918610] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.915 [2024-04-26 21:35:28.919043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.915 [2024-04-26 21:35:28.919075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.915 [2024-04-26 21:35:28.922650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.915 [2024-04-26 21:35:28.923122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.915 [2024-04-26 21:35:28.923153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.915 [2024-04-26 21:35:28.926705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.915 [2024-04-26 21:35:28.927140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.915 [2024-04-26 21:35:28.927172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.915 [2024-04-26 21:35:28.930802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.915 [2024-04-26 21:35:28.931269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.915 [2024-04-26 21:35:28.931304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.915 [2024-04-26 21:35:28.934986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.915 [2024-04-26 21:35:28.935468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.915 [2024-04-26 21:35:28.935501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.915 [2024-04-26 21:35:28.939250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.915 [2024-04-26 21:35:28.939740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.915 [2024-04-26 21:35:28.939772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.915 [2024-04-26 21:35:28.943672] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.915 [2024-04-26 21:35:28.944137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.915 [2024-04-26 21:35:28.944167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.915 [2024-04-26 21:35:28.947936] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.915 [2024-04-26 21:35:28.948416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.915 [2024-04-26 21:35:28.948445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.915 [2024-04-26 21:35:28.951991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.915 [2024-04-26 21:35:28.952442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:28.952486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:28.956267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:28.956749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:28.956781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:28.960508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:28.960975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:28.961009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:28.964761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:28.965176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:28.965207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:28.968819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:28.969227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:28.969270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:28.972863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:28.973287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:28.973345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:28.976830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:28.977262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:28.977305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:28.980735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:28.981164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:28.981205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:28.984617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:28.985047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:28.985089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:28.988550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 
[2024-04-26 21:35:28.988979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:28.989023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:28.992471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:28.992869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:28.992900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:28.996331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:28.996800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:28.996829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.000306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:29.000779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:29.000809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.004258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:29.004749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:29.004782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.008516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:29.008955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:29.008996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.012452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:29.012908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:29.012939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.017108] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:29.017594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:29.017626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.021443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:29.021923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:29.021956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.025912] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:29.026383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:29.026415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.030125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:29.030568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:29.030601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.034468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:29.034970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:29.035003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.038835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:29.039350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:29.039393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.043263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:29.043708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:29.043739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.047367] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:29.047806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:29.047837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.051532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.916 [2024-04-26 21:35:29.051982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.916 [2024-04-26 21:35:29.052011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.916 [2024-04-26 21:35:29.055673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.056103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.056134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.059840] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.060319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.060380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.063920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.064382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.064424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.067959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.068395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.068425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.071921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.072371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.072404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:32:39.917 [2024-04-26 21:35:29.076025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.076460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.076488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.080048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.080461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.080491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.084075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.084532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.084574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.088216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.088748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.088776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.092474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.092928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.092960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.096698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.097109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.097152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.100745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.101179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.101224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.104947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.105433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.105466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.109153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.109623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.109668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.113278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.113740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.113795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.117361] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.117836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.117869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.121420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.121840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.121884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.125267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.125727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.125763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:39.917 [2024-04-26 21:35:29.129171] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:39.917 [2024-04-26 21:35:29.129620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.917 [2024-04-26 21:35:29.129661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:39.917 [2024-04-26 21:35:29.133197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90
00:32:39.917 [2024-04-26 21:35:29.133629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:39.917 [2024-04-26 21:35:29.133672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:39.917 [2024-04-26 21:35:29.137114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90
00:32:39.917 [2024-04-26 21:35:29.137543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:39.917 [2024-04-26 21:35:29.137573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
... the same three-message sequence (a data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90, the affected WRITE on sqid:1 nsid:1 len:32 with cid 15 or 0 and a varying LBA, and a retryable TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0) repeats continuously for the next half second of output, from 21:35:29.141 through 21:35:29.633 (elapsed 00:32:39.917 to 00:32:40.497) ...
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.497 [2024-04-26 21:35:29.633684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.497 [2024-04-26 21:35:29.633712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.497 [2024-04-26 21:35:29.637127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.497 [2024-04-26 21:35:29.637271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.497 [2024-04-26 21:35:29.637303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.497 [2024-04-26 21:35:29.640738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.497 [2024-04-26 21:35:29.640883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.497 [2024-04-26 21:35:29.640911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.497 [2024-04-26 21:35:29.644211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.497 [2024-04-26 21:35:29.644381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.497 [2024-04-26 21:35:29.644406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.497 [2024-04-26 21:35:29.647717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.497 [2024-04-26 21:35:29.647899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.497 [2024-04-26 21:35:29.647939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.497 [2024-04-26 21:35:29.651252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.497 [2024-04-26 21:35:29.651415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.497 [2024-04-26 21:35:29.651441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.654657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.654819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.654846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:32:40.498 [2024-04-26 21:35:29.658024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.658165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.658192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.661467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.661619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.661645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.664795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.664950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.664978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.668179] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.668367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.668387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.671595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.671720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.671737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.675070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.675225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.675252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.678615] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.678773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.678800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.682132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.682262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.682289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.685477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.685672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.685699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.688994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.689142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.689170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.692306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.692505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.692531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.695714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.695890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.695917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.699101] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.699253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.699278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.702417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.702562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.702588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.705661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.705824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.705849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.708983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.709112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.709135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.712265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.712440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.712468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.715626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.715789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.715817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.719069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.719214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.719242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.722588] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.722727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.722754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.726194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.726351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.726379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.729610] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.729780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.729806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.733030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.733187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.498 [2024-04-26 21:35:29.733214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.498 [2024-04-26 21:35:29.736624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.498 [2024-04-26 21:35:29.736800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.499 [2024-04-26 21:35:29.736845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.499 [2024-04-26 21:35:29.740202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.499 [2024-04-26 21:35:29.740377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.499 [2024-04-26 21:35:29.740402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.760 [2024-04-26 21:35:29.744152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.760 [2024-04-26 21:35:29.744384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.760 [2024-04-26 21:35:29.744454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.747794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.747953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.747984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.751427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.751564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 
[2024-04-26 21:35:29.751589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.755152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.755341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.755366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.758527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.758689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.758718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.762020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.762180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.762211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.765449] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.765598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.765627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.769041] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.769195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.769215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.772524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.772643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.772662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.776439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.776577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.776596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.779981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.780116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.780135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.783694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.783826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.783846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.787564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.787687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.787706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.791140] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.791286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.791307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.794660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.794805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.794831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.798240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.798390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.798410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.801822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.801947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.801969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.805536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.805680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.805701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.809079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.809193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.809214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.812744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.812897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.812941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.816414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.816550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.816571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.819903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.820062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.820094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.823523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.823655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.823675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.827139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.827262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.827281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.830760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.830877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.761 [2024-04-26 21:35:29.830903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.761 [2024-04-26 21:35:29.834201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.761 [2024-04-26 21:35:29.834331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.834362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.837653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.837779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.837797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.841089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.841212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.841238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.844578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.844720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.844739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.848079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.848207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.848238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.851522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 
[2024-04-26 21:35:29.851658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.851685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.854866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.854979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.854997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.858121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.858240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.858257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.861278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.861396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.861413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.864477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.864624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.864640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.867867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.867979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.867997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.871200] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.871304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.871320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.874490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.874629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.874645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.877716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.877838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.877854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.881085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.881206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.881224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.884571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.884710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.884728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.888067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.888187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.888205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.891449] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.891614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.891631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.894575] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.894686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.894702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.897748] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.897896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.897915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.900957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.901067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.901084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.904111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.904252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.904270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.907517] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.907658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.907678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.910753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.910910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.910929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.914027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.914150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.914170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.917189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.917305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.917343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:32:40.762 [2024-04-26 21:35:29.920570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.920688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.920707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.762 [2024-04-26 21:35:29.923869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.762 [2024-04-26 21:35:29.923990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.762 [2024-04-26 21:35:29.924015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.927114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.927241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.927259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.930301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.930425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.930444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.933450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.933551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.933569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.936656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.936807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.936840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.939958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.940073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.940090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.943387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.943505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.943522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.946761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.946886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.946904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.950173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.950303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.950320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.953681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.953805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.953824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.957125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.957259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.957276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.960648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.960771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.960789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.964174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.964303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.964321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.967653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.967785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.967803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.971117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.971232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.971250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.974448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.974589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.974623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.977908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.978029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.978046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.981514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.981649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.981679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.984900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.985124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.985155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.988463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.988580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.988600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.992014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.992151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.992168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.995518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.995637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.995654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:29.998947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:29.999055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:29.999073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:30.002174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:30.002305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:30.002321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:30.005401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:30.005533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:30.005551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:40.763 [2024-04-26 21:35:30.008938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:40.763 [2024-04-26 21:35:30.009060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.763 [2024-04-26 21:35:30.009079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.012390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.012551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 
[2024-04-26 21:35:30.012569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.015787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.015900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.015916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.019204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.019321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.019350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.022512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.022638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.022655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.025717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.025852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.025870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.029127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.029236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.029254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.032509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.032684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.032703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.036053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.036169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.036187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.039953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.040094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.040112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.043725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.043850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.043867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.047387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.047535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.047553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.051075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.051185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.051202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.054429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.054557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.054574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.057710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.057843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.057860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.061146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.061282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.061300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.064673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.064790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.064808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.068211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.068352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.068384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.071870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.071990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.072010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.075527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.075674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.075697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.079082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.079230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.025 [2024-04-26 21:35:30.079249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.025 [2024-04-26 21:35:30.082638] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.025 [2024-04-26 21:35:30.082764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.082782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.085978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.086123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.086142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.089215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.089365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.089385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.092518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.092682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.092700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.095804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.095918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.095936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.099094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.099219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.099234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.102278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.102396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.102413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.105432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.105557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.105575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.108646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 
[2024-04-26 21:35:30.108814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.108832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.111940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.112054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.112073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.115259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.115382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.115401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.118604] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.118711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.118730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.121794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.121907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.121925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.125019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.125154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.125172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.128181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.128295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.128313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.131607] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.131744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.131764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.135013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.135148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.135168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.138373] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.138517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.138539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.141900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.142032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.142054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.145602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.145733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.145768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.149239] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.149393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.149417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.152716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.152842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.152862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.156088] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.156219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.156239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.159633] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.159772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.159791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.163031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.163151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.163169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.166380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.166501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.166520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.169676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.169823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.169841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.026 [2024-04-26 21:35:30.173033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.026 [2024-04-26 21:35:30.173164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.026 [2024-04-26 21:35:30.173183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.176466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.176589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.176608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:32:41.027 [2024-04-26 21:35:30.180037] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.180172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.180192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.183521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.183672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.183692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.186970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.187092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.187111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.190404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.190516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.190537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.193700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.193861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.193883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.197046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.197217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.197237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.200456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.200585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.200604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.203904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.204034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.204054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.207281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.207441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.207461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.210751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.210877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.210897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.214161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.214309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.214329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.217794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.217917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.217937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.221229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.221415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.221434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.224708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.224834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.224853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.228571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.228695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.228715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.232240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.232411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.232431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.235856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.235986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.236008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.239472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.239605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.239626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.243033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.243158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.243180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.246614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.246837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.246874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.250196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.250322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.250356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.253903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.254034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.254055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.257428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.257602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.257632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.261047] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.261218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.261238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.264606] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.264745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.264764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.268180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.027 [2024-04-26 21:35:30.268347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.027 [2024-04-26 21:35:30.268367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.027 [2024-04-26 21:35:30.271762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.028 [2024-04-26 21:35:30.271897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.028 [2024-04-26 21:35:30.271916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.275428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.275547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 
[2024-04-26 21:35:30.275567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.279492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.279623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 [2024-04-26 21:35:30.279643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.283172] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.283312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 [2024-04-26 21:35:30.283334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.287213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.287327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 [2024-04-26 21:35:30.287377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.291717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.291830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 [2024-04-26 21:35:30.291849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.295437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.295554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 [2024-04-26 21:35:30.295575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.299432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.299544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 [2024-04-26 21:35:30.299566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.303161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.303285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 [2024-04-26 21:35:30.303307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.307425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.307556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 [2024-04-26 21:35:30.307579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.311842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.311975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 [2024-04-26 21:35:30.312000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.315844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.316007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 [2024-04-26 21:35:30.316032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.319921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.320063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 [2024-04-26 21:35:30.320090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.323896] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.324012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 [2024-04-26 21:35:30.324037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.328712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.290 [2024-04-26 21:35:30.328845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.290 [2024-04-26 21:35:30.328870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.290 [2024-04-26 21:35:30.333207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.333325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.333349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.337028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.337187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.337209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.340724] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.340870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.340893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.344227] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.344385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.344406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.347818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.347951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.347972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.351490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.351693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.351735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.355174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.355367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.355386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.359037] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.359135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.359154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.362765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.362869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.362889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.366362] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.366478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.366497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.369878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.369977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.369997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.373279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.373440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.373460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.376705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.376807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.376826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.380235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.380402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.380421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.383898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 
[2024-04-26 21:35:30.384072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.384093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.387511] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.387645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.387665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.390916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.390992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.391011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.394482] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.394603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.394623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.398037] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.398131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.398151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.401634] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.401781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.401802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.405205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.405356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.405375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.408779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with 
pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.409001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.409019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.412185] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.412423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.412442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.415710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.415841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.415866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.419310] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.419450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.419468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.422810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.422945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.422963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.426265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.291 [2024-04-26 21:35:30.426407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.291 [2024-04-26 21:35:30.426427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.291 [2024-04-26 21:35:30.429566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.429711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.429729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.433249] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.433340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.433373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.436656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.436806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.436827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.440324] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.440420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.440440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.443892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.444030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.444051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.447523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.447605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.447626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.451052] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.451189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.451209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.454459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.454564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.454584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.457714] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.457939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.457958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.461087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.461217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.461236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.464612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.464692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.464711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.468033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.468155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.468174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.471425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.471579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.471596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.474976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.475096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.475116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.479021] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.479302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.479347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 
21:35:30.482805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.482912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.482932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.487149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.487237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.487256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.491130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.491239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.491260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.495075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.495177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.495197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.498579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.498734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.498764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.502184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.502323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.502356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.505527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.505658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.505677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:32:41.292 [2024-04-26 21:35:30.509258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.509354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.509374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.512804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.512919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.512937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.516195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.516304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.516322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.519803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.519896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.519914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.523483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.292 [2024-04-26 21:35:30.523586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.292 [2024-04-26 21:35:30.523604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.292 [2024-04-26 21:35:30.526916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.293 [2024-04-26 21:35:30.527032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.293 [2024-04-26 21:35:30.527050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.293 [2024-04-26 21:35:30.530506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.293 [2024-04-26 21:35:30.530624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.293 [2024-04-26 21:35:30.530644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.293 [2024-04-26 21:35:30.535132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.293 [2024-04-26 21:35:30.535216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.293 [2024-04-26 21:35:30.535236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.293 [2024-04-26 21:35:30.539733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.293 [2024-04-26 21:35:30.539802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.293 [2024-04-26 21:35:30.539821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.554 [2024-04-26 21:35:30.543425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.554 [2024-04-26 21:35:30.543514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.554 [2024-04-26 21:35:30.543532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.554 [2024-04-26 21:35:30.547456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.554 [2024-04-26 21:35:30.547610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.554 [2024-04-26 21:35:30.547630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.554 [2024-04-26 21:35:30.552155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.554 [2024-04-26 21:35:30.552232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.554 [2024-04-26 21:35:30.552252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.554 [2024-04-26 21:35:30.556343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.554 [2024-04-26 21:35:30.556441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.554 [2024-04-26 21:35:30.556460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.554 [2024-04-26 21:35:30.559978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.554 [2024-04-26 21:35:30.560086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.554 [2024-04-26 21:35:30.560105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.554 [2024-04-26 21:35:30.563542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.554 [2024-04-26 21:35:30.563647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.554 [2024-04-26 21:35:30.563664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.554 [2024-04-26 21:35:30.567145] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.554 [2024-04-26 21:35:30.567290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.554 [2024-04-26 21:35:30.567308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.554 [2024-04-26 21:35:30.570703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.554 [2024-04-26 21:35:30.570800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.554 [2024-04-26 21:35:30.570819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.554 [2024-04-26 21:35:30.574142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.554 [2024-04-26 21:35:30.574240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.554 [2024-04-26 21:35:30.574261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.554 [2024-04-26 21:35:30.577454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.554 [2024-04-26 21:35:30.577543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.577562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.580838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.580930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.580949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.584255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.584516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.584534] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.587827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.587950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.587969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.591489] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.591610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.591629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.595079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.595194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.595211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.598543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.598638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.598657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.601866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.601981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.602000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.605272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.605410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.605430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.610094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.610219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.610239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.613663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.613774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.613811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.617051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.617183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.617201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.620469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.620578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.620595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.623919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.624022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.624039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.627464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.627562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.627578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.630983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.631088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.631105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.634423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.634514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 
21:35:30.634534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.637784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.637892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.637912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.641127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.641218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.641236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.644585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.644690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.644708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.648129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.648224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.648244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.651726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.651822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.651844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.555 [2024-04-26 21:35:30.655252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.555 [2024-04-26 21:35:30.655346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.555 [2024-04-26 21:35:30.655368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.658679] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.658765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:41.556 [2024-04-26 21:35:30.658784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.661866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.661971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.661991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.665272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.665418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.665438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.668745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.668897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.668915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.672190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.672273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.672290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.675550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.675632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.675649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.678993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.679108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.679125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.682247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.682376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.682395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.685497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.685602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.685618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.688686] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.688766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.688782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.691889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.692009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.692025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.695236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.695323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.695351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.698524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.698662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.698687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.701725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.701850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.701868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.705044] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.705125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.705141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.708254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.708392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.708408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.711536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.711617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.711634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.714782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.714885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.714901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.718027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.718098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.718117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.721306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.721386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.721407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.724599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.724719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.556 [2024-04-26 21:35:30.724737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.556 [2024-04-26 21:35:30.727858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.556 [2024-04-26 21:35:30.727948] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.557 [2024-04-26 21:35:30.727966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.557 [2024-04-26 21:35:30.731030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.557 [2024-04-26 21:35:30.731117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.557 [2024-04-26 21:35:30.731136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.557 [2024-04-26 21:35:30.734254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.557 [2024-04-26 21:35:30.734335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.557 [2024-04-26 21:35:30.734365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.557 [2024-04-26 21:35:30.737481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.557 [2024-04-26 21:35:30.737563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.557 [2024-04-26 21:35:30.737581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.557 [2024-04-26 21:35:30.740714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.557 [2024-04-26 21:35:30.740817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.557 [2024-04-26 21:35:30.740835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:41.557 [2024-04-26 21:35:30.743945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.557 [2024-04-26 21:35:30.744057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.557 [2024-04-26 21:35:30.744073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:41.557 [2024-04-26 21:35:30.747156] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.557 [2024-04-26 21:35:30.747259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.557 [2024-04-26 21:35:30.747275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:41.557 [2024-04-26 21:35:30.750426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a62360) with pdu=0x2000190fef90 00:32:41.557 [2024-04-26 21:35:30.750481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:41.557 [2024-04-26 21:35:30.750499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:41.557 00:32:41.557 Latency(us) 00:32:41.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.557 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:41.557 nvme0n1 : 2.00 8477.69 1059.71 0.00 0.00 1883.50 1302.13 6353.27 00:32:41.557 =================================================================================================================== 00:32:41.557 Total : 8477.69 1059.71 0.00 0.00 1883.50 1302.13 6353.27 00:32:41.557 0 00:32:41.557 21:35:30 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:41.557 21:35:30 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:41.557 21:35:30 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:41.557 | .driver_specific 00:32:41.557 | .nvme_error 00:32:41.557 | .status_code 00:32:41.557 | .command_transient_transport_error' 00:32:41.557 21:35:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:41.817 21:35:31 -- host/digest.sh@71 -- # (( 547 > 0 )) 00:32:41.817 21:35:31 -- host/digest.sh@73 -- # killprocess 105080 00:32:41.817 21:35:31 -- common/autotest_common.sh@936 -- # '[' -z 105080 ']' 00:32:41.817 21:35:31 -- common/autotest_common.sh@940 -- # kill -0 105080 00:32:41.817 21:35:31 -- common/autotest_common.sh@941 -- # uname 00:32:41.817 21:35:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:41.817 21:35:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105080 00:32:41.817 21:35:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:41.817 21:35:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:41.817 killing process with pid 105080 00:32:41.817 21:35:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105080' 00:32:41.817 21:35:31 -- common/autotest_common.sh@955 -- # kill 105080 00:32:41.817 Received shutdown signal, test time was about 2.000000 seconds 00:32:41.817 00:32:41.817 Latency(us) 00:32:41.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.817 =================================================================================================================== 00:32:41.817 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:41.817 21:35:31 -- common/autotest_common.sh@960 -- # wait 105080 00:32:42.077 21:35:31 -- host/digest.sh@116 -- # killprocess 104769 00:32:42.077 21:35:31 -- common/autotest_common.sh@936 -- # '[' -z 104769 ']' 00:32:42.077 21:35:31 -- common/autotest_common.sh@940 -- # kill -0 104769 00:32:42.077 21:35:31 -- common/autotest_common.sh@941 -- # uname 00:32:42.077 21:35:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:42.077 21:35:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 104769 00:32:42.077 21:35:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:42.077 21:35:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:42.077 killing process with pid 104769 00:32:42.077 21:35:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 104769' 00:32:42.077 21:35:31 -- common/autotest_common.sh@955 -- # kill 104769 00:32:42.077 21:35:31 -- 
common/autotest_common.sh@960 -- # wait 104769 00:32:42.336 00:32:42.336 real 0m17.934s 00:32:42.336 user 0m34.127s 00:32:42.336 sys 0m4.450s 00:32:42.336 21:35:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:42.336 21:35:31 -- common/autotest_common.sh@10 -- # set +x 00:32:42.336 ************************************ 00:32:42.336 END TEST nvmf_digest_error 00:32:42.336 ************************************ 00:32:42.336 21:35:31 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:42.336 21:35:31 -- host/digest.sh@150 -- # nvmftestfini 00:32:42.336 21:35:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:42.336 21:35:31 -- nvmf/common.sh@117 -- # sync 00:32:42.596 21:35:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:42.596 21:35:31 -- nvmf/common.sh@120 -- # set +e 00:32:42.596 21:35:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:42.596 21:35:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:42.596 rmmod nvme_tcp 00:32:42.596 rmmod nvme_fabrics 00:32:42.596 rmmod nvme_keyring 00:32:42.596 21:35:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:42.596 21:35:31 -- nvmf/common.sh@124 -- # set -e 00:32:42.596 21:35:31 -- nvmf/common.sh@125 -- # return 0 00:32:42.596 21:35:31 -- nvmf/common.sh@478 -- # '[' -n 104769 ']' 00:32:42.596 21:35:31 -- nvmf/common.sh@479 -- # killprocess 104769 00:32:42.596 21:35:31 -- common/autotest_common.sh@936 -- # '[' -z 104769 ']' 00:32:42.596 21:35:31 -- common/autotest_common.sh@940 -- # kill -0 104769 00:32:42.596 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (104769) - No such process 00:32:42.596 Process with pid 104769 is not found 00:32:42.596 21:35:31 -- common/autotest_common.sh@963 -- # echo 'Process with pid 104769 is not found' 00:32:42.596 21:35:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:32:42.596 21:35:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:42.596 21:35:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:42.596 21:35:31 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:42.597 21:35:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:42.597 21:35:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.597 21:35:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:42.597 21:35:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.597 21:35:31 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:42.597 00:32:42.597 real 0m36.563s 00:32:42.597 user 1m8.262s 00:32:42.597 sys 0m9.224s 00:32:42.597 21:35:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:42.597 21:35:31 -- common/autotest_common.sh@10 -- # set +x 00:32:42.597 ************************************ 00:32:42.597 END TEST nvmf_digest 00:32:42.597 ************************************ 00:32:42.597 21:35:31 -- nvmf/nvmf.sh@108 -- # [[ 1 -eq 1 ]] 00:32:42.597 21:35:31 -- nvmf/nvmf.sh@108 -- # [[ tcp == \t\c\p ]] 00:32:42.597 21:35:31 -- nvmf/nvmf.sh@110 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:32:42.597 21:35:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:32:42.597 21:35:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:42.597 21:35:31 -- common/autotest_common.sh@10 -- # set +x 00:32:42.857 ************************************ 00:32:42.857 START TEST nvmf_mdns_discovery 00:32:42.857 ************************************ 00:32:42.857 21:35:31 -- common/autotest_common.sh@1111 -- 
# /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:32:42.857 * Looking for test storage... 00:32:42.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:42.857 21:35:31 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:42.857 21:35:31 -- nvmf/common.sh@7 -- # uname -s 00:32:42.857 21:35:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:42.857 21:35:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:42.857 21:35:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:42.857 21:35:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:42.857 21:35:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:42.857 21:35:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:42.857 21:35:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:42.857 21:35:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:42.857 21:35:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:42.857 21:35:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:42.857 21:35:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:32:42.857 21:35:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:32:42.857 21:35:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:42.857 21:35:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:42.857 21:35:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:42.857 21:35:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:42.857 21:35:32 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:42.857 21:35:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:42.857 21:35:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:42.857 21:35:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:42.857 21:35:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.857 21:35:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.857 21:35:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.857 21:35:32 -- paths/export.sh@5 -- # export PATH 00:32:42.857 21:35:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.857 21:35:32 -- nvmf/common.sh@47 -- # : 0 00:32:42.857 21:35:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:42.857 21:35:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:42.857 21:35:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:42.857 21:35:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:42.857 21:35:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:42.857 21:35:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:42.857 21:35:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:42.857 21:35:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:42.857 21:35:32 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:32:42.857 21:35:32 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:32:42.857 21:35:32 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:42.857 21:35:32 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:42.857 21:35:32 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:32:42.857 21:35:32 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:42.857 21:35:32 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:32:42.857 21:35:32 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:32:42.857 21:35:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:32:42.857 21:35:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:42.857 21:35:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:32:42.857 21:35:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:32:42.857 21:35:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:32:42.857 21:35:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.857 21:35:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:42.857 21:35:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:42.857 21:35:32 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:32:42.857 21:35:32 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:32:42.857 21:35:32 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:32:42.857 21:35:32 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:32:42.858 21:35:32 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:32:42.858 21:35:32 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:32:42.858 21:35:32 -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:32:42.858 21:35:32 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:42.858 21:35:32 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:42.858 21:35:32 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:42.858 21:35:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:42.858 21:35:32 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:42.858 21:35:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:42.858 21:35:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:42.858 21:35:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:42.858 21:35:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:42.858 21:35:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:42.858 21:35:32 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:42.858 21:35:32 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:42.858 21:35:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:42.858 Cannot find device "nvmf_tgt_br" 00:32:42.858 21:35:32 -- nvmf/common.sh@155 -- # true 00:32:42.858 21:35:32 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:42.858 Cannot find device "nvmf_tgt_br2" 00:32:42.858 21:35:32 -- nvmf/common.sh@156 -- # true 00:32:42.858 21:35:32 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:42.858 21:35:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:43.117 Cannot find device "nvmf_tgt_br" 00:32:43.117 21:35:32 -- nvmf/common.sh@158 -- # true 00:32:43.117 21:35:32 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:43.117 Cannot find device "nvmf_tgt_br2" 00:32:43.117 21:35:32 -- nvmf/common.sh@159 -- # true 00:32:43.117 21:35:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:43.117 21:35:32 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:43.117 21:35:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:43.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:43.117 21:35:32 -- nvmf/common.sh@162 -- # true 00:32:43.117 21:35:32 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:43.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:43.117 21:35:32 -- nvmf/common.sh@163 -- # true 00:32:43.117 21:35:32 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:43.117 21:35:32 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:43.117 21:35:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:43.117 21:35:32 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:43.117 21:35:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:43.117 21:35:32 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:43.117 21:35:32 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:43.117 21:35:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:43.117 21:35:32 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:43.117 21:35:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:43.117 21:35:32 -- nvmf/common.sh@184 -- # ip 
link set nvmf_init_br up 00:32:43.117 21:35:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:43.117 21:35:32 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:43.117 21:35:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:43.117 21:35:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:43.117 21:35:32 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:43.117 21:35:32 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:43.117 21:35:32 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:43.117 21:35:32 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:43.117 21:35:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:43.117 21:35:32 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:43.117 21:35:32 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:43.117 21:35:32 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:43.376 21:35:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:43.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:43.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:32:43.376 00:32:43.376 --- 10.0.0.2 ping statistics --- 00:32:43.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.376 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:32:43.376 21:35:32 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:43.376 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:43.376 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:32:43.376 00:32:43.376 --- 10.0.0.3 ping statistics --- 00:32:43.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.376 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:32:43.376 21:35:32 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:43.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:43.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:32:43.376 00:32:43.376 --- 10.0.0.1 ping statistics --- 00:32:43.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.376 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:32:43.376 21:35:32 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.376 21:35:32 -- nvmf/common.sh@422 -- # return 0 00:32:43.376 21:35:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:32:43.376 21:35:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.376 21:35:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:32:43.376 21:35:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:32:43.376 21:35:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.376 21:35:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:32:43.376 21:35:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:32:43.376 21:35:32 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:32:43.376 21:35:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:43.376 21:35:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:43.376 21:35:32 -- common/autotest_common.sh@10 -- # set +x 00:32:43.376 21:35:32 -- nvmf/common.sh@470 -- # nvmfpid=105376 00:32:43.376 21:35:32 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:32:43.376 21:35:32 -- nvmf/common.sh@471 -- # waitforlisten 105376 00:32:43.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.376 21:35:32 -- common/autotest_common.sh@817 -- # '[' -z 105376 ']' 00:32:43.376 21:35:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.376 21:35:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:43.376 21:35:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.376 21:35:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:43.376 21:35:32 -- common/autotest_common.sh@10 -- # set +x 00:32:43.376 [2024-04-26 21:35:32.482708] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:32:43.376 [2024-04-26 21:35:32.482785] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.376 [2024-04-26 21:35:32.612313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.633 [2024-04-26 21:35:32.673511] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.633 [2024-04-26 21:35:32.673571] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.633 [2024-04-26 21:35:32.673581] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.633 [2024-04-26 21:35:32.673587] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.633 [2024-04-26 21:35:32.673593] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
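Note on the setup above: nvmf_veth_init builds a small virtual topology so the target can listen on addresses separate from the host's own stack, which is what the three successful pings verify. A minimal hand-written sketch of what the ip commands in the log amount to, using only the interface and address names shown there (an illustration, not a substitute for nvmf/common.sh):

  # target gets its own network namespace
  ip netns add nvmf_tgt_ns_spdk

  # three veth pairs: the initiator leg stays in the root namespace,
  # the two target legs are moved into the namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addressing used by this test: 10.0.0.1 initiator, 10.0.0.2/10.0.0.3 target
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # one bridge ties the root-namespace legs together so the pings above succeed
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # plus the remaining 'ip link set ... up' and iptables ACCEPT calls in the log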
00:32:43.633 [2024-04-26 21:35:32.673621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.198 21:35:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:44.198 21:35:33 -- common/autotest_common.sh@850 -- # return 0 00:32:44.198 21:35:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:44.198 21:35:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:44.198 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:32:44.198 21:35:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:44.198 21:35:33 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:32:44.198 21:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:44.198 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:32:44.198 21:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:44.198 21:35:33 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:32:44.198 21:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:44.198 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:32:44.456 21:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:44.456 21:35:33 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:44.456 21:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:44.456 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:32:44.456 [2024-04-26 21:35:33.523415] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:44.456 21:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:44.456 21:35:33 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:44.456 21:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:44.456 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:32:44.456 [2024-04-26 21:35:33.531454] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:44.456 21:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:44.456 21:35:33 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:44.456 21:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:44.456 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:32:44.456 null0 00:32:44.456 21:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:44.456 21:35:33 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:44.456 21:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:44.456 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:32:44.456 null1 00:32:44.456 21:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:44.456 21:35:33 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:32:44.456 21:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:44.456 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:32:44.456 null2 00:32:44.456 21:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:44.456 21:35:33 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:32:44.456 21:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:44.456 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:32:44.456 null3 00:32:44.456 21:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:44.456 21:35:33 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
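The rpc_cmd calls just above are a fairly standard SPDK target bring-up: the app was started with --wait-for-rpc, so the test first restricts the discovery service to address-based filtering, finishes framework init, creates the TCP transport, exposes the discovery subsystem on port 8009, and creates four null bdevs to export later. A rough stand-alone equivalent using scripts/rpc.py (default /var/tmp/spdk.sock socket assumed; the harness drives the same RPCs through its rpc_cmd wrapper):

  # configuration that must happen before framework_start_init
  scripts/rpc.py nvmf_set_config --discovery-filter=address
  scripts/rpc.py framework_start_init

  # TCP transport with the options used by this test (-o -u 8192, copied from the log)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

  # discovery subsystem listener on the port that will be advertised over mDNS
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009

  # namespaces to attach to cnode0/cnode20 further down: 1000 blocks x 512 bytes
  for b in null0 null1 null2 null3; do
      scripts/rpc.py bdev_null_create "$b" 1000 512
  done
  scripts/rpc.py bdev_wait_for_examine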
00:32:44.456 21:35:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:44.456 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:32:44.456 21:35:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:44.456 21:35:33 -- host/mdns_discovery.sh@47 -- # hostpid=105426 00:32:44.456 21:35:33 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:44.456 21:35:33 -- host/mdns_discovery.sh@48 -- # waitforlisten 105426 /tmp/host.sock 00:32:44.456 21:35:33 -- common/autotest_common.sh@817 -- # '[' -z 105426 ']' 00:32:44.456 21:35:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:32:44.456 21:35:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:44.456 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:44.456 21:35:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:44.456 21:35:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:44.456 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:32:44.456 [2024-04-26 21:35:33.639160] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:32:44.456 [2024-04-26 21:35:33.639323] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105426 ] 00:32:44.716 [2024-04-26 21:35:33.778980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.716 [2024-04-26 21:35:33.832366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.282 21:35:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:45.282 21:35:34 -- common/autotest_common.sh@850 -- # return 0 00:32:45.539 21:35:34 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:32:45.539 21:35:34 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:32:45.539 21:35:34 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:32:45.539 21:35:34 -- host/mdns_discovery.sh@57 -- # avahipid=105455 00:32:45.539 21:35:34 -- host/mdns_discovery.sh@58 -- # sleep 1 00:32:45.540 21:35:34 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:32:45.540 21:35:34 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:32:45.540 Process 1013 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:32:45.540 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:32:45.540 Successfully dropped root privileges. 00:32:45.540 avahi-daemon 0.8 starting up. 00:32:45.540 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:32:45.540 Successfully called chroot(). 00:32:45.540 Successfully dropped remaining capabilities. 00:32:45.540 No service file found in /etc/avahi/services. 00:32:45.540 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:32:45.540 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:32:45.540 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:32:45.540 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:32:45.540 Network interface enumeration completed. 
00:32:45.540 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:32:45.540 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:32:45.540 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:32:45.540 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:32:46.478 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 3469949225. 00:32:46.478 21:35:35 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:46.478 21:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.478 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.478 21:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.478 21:35:35 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:32:46.478 21:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.478 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.478 21:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.478 21:35:35 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:32:46.478 21:35:35 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:32:46.478 21:35:35 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:46.478 21:35:35 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:32:46.478 21:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.478 21:35:35 -- host/mdns_discovery.sh@68 -- # sort 00:32:46.478 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.478 21:35:35 -- host/mdns_discovery.sh@68 -- # xargs 00:32:46.478 21:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@64 -- # xargs 00:32:46.736 21:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@64 -- # sort 00:32:46.736 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.736 21:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:46.736 21:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.736 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.736 21:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@68 -- # sort 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:32:46.736 21:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.736 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@68 -- # xargs 00:32:46.736 21:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:32:46.736 
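At this point two things have happened on the host side: a second nvmf_tgt instance (the "host" app, RPC socket /tmp/host.sock) is running, and avahi-daemon has been restarted inside the target namespace, restricted to nvmf_tgt_if/nvmf_tgt_if2. The RPC shown in the log then tells bdev_nvme to browse mDNS for discovery controllers instead of being handed an explicit traddr. A condensed view of the host-side steps, using the same socket path and host NQN as the log:

  # verbose bdev_nvme logging on the host app
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme

  # browse for _nvme-disc._tcp services via avahi; each resolved CDC record
  # becomes a discovery controller (mdns0_nvme / mdns1_nvme in this run)
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
      -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

  # the checks that follow in the log poll these until the controllers
  # and their namespaces (mdns0_nvme0n1, ...) appear
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers

Nothing is discovered until a few lines further down, where avahi-publish registers the 'CDC' _nvme-disc._tcp service on port 8009 with NQN=nqn.2014-08.org.nvmexpress.discovery in its TXT record.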
21:35:35 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:32:46.736 21:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.736 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@64 -- # xargs 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@64 -- # sort 00:32:46.736 21:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:46.736 21:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.736 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.736 21:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@68 -- # sort 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@68 -- # xargs 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:32:46.736 21:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.736 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.736 21:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:46.736 21:35:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.736 21:35:35 -- common/autotest_common.sh@10 -- # set +x 00:32:46.736 21:35:35 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:32:46.737 21:35:35 -- host/mdns_discovery.sh@64 -- # sort 00:32:46.737 21:35:35 -- host/mdns_discovery.sh@64 -- # xargs 00:32:46.737 21:35:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.737 [2024-04-26 21:35:35.973596] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:32:46.996 21:35:36 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:32:46.996 21:35:36 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:46.996 21:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.996 21:35:36 -- common/autotest_common.sh@10 -- # set +x 00:32:46.996 [2024-04-26 21:35:36.019349] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:46.996 21:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.996 21:35:36 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:46.996 21:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.996 21:35:36 -- common/autotest_common.sh@10 -- # set +x 00:32:46.996 21:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.996 21:35:36 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:32:46.996 21:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.996 21:35:36 -- common/autotest_common.sh@10 -- # set +x 00:32:46.996 21:35:36 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.996 21:35:36 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:32:46.996 21:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.996 21:35:36 -- common/autotest_common.sh@10 -- # set +x 00:32:46.996 21:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.996 21:35:36 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:32:46.996 21:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.996 21:35:36 -- common/autotest_common.sh@10 -- # set +x 00:32:46.996 21:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.996 21:35:36 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:32:46.996 21:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.996 21:35:36 -- common/autotest_common.sh@10 -- # set +x 00:32:46.996 [2024-04-26 21:35:36.079238] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:32:46.996 21:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.996 21:35:36 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:32:46.996 21:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:46.996 21:35:36 -- common/autotest_common.sh@10 -- # set +x 00:32:46.996 [2024-04-26 21:35:36.091154] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:46.996 21:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:46.996 21:35:36 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=105507 00:32:46.996 21:35:36 -- host/mdns_discovery.sh@125 -- # sleep 5 00:32:46.996 21:35:36 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:32:47.972 [2024-04-26 21:35:36.871879] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:32:47.972 Established under name 'CDC' 00:32:48.229 [2024-04-26 21:35:37.271129] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:32:48.229 [2024-04-26 21:35:37.271269] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:32:48.229 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:32:48.229 cookie is 0 00:32:48.229 is_local: 1 00:32:48.229 our_own: 0 00:32:48.229 wide_area: 0 00:32:48.229 multicast: 1 00:32:48.229 cached: 1 00:32:48.229 [2024-04-26 21:35:37.370930] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:32:48.229 [2024-04-26 21:35:37.371065] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:32:48.229 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:32:48.229 cookie is 0 00:32:48.229 is_local: 1 00:32:48.229 our_own: 0 00:32:48.229 wide_area: 0 00:32:48.229 multicast: 1 00:32:48.229 cached: 1 00:32:49.164 [2024-04-26 21:35:38.277496] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:32:49.164 [2024-04-26 21:35:38.277529] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery 
ctrlr connected 00:32:49.164 [2024-04-26 21:35:38.277544] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:32:49.164 [2024-04-26 21:35:38.363476] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:32:49.164 [2024-04-26 21:35:38.376980] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:49.164 [2024-04-26 21:35:38.377013] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:49.164 [2024-04-26 21:35:38.377027] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:49.423 [2024-04-26 21:35:38.424580] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:32:49.424 [2024-04-26 21:35:38.424720] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:32:49.424 [2024-04-26 21:35:38.462852] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:32:49.424 [2024-04-26 21:35:38.517927] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:32:49.424 [2024-04-26 21:35:38.518072] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@80 -- # xargs 00:32:51.957 21:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:32:51.957 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@80 -- # sort 00:32:51.957 21:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:51.957 21:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:51.957 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@76 -- # sort 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@76 -- # xargs 00:32:51.957 21:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@68 -- # sort 00:32:51.957 21:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:51.957 21:35:41 -- host/mdns_discovery.sh@68 -- # xargs 00:32:51.957 21:35:41 -- common/autotest_common.sh@10 -- # set 
+x 00:32:52.215 21:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@64 -- # sort 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:32:52.215 21:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:52.215 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@64 -- # xargs 00:32:52.215 21:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:32:52.215 21:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:52.215 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@72 -- # sort -n 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@72 -- # xargs 00:32:52.215 21:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@72 -- # sort -n 00:32:52.215 21:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@72 -- # xargs 00:32:52.215 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:32:52.215 21:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:52.215 21:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:52.215 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:32:52.215 21:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:52.215 21:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:52.215 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:32:52.215 21:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:32:52.215 21:35:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:52.215 21:35:41 -- common/autotest_common.sh@10 -- # set +x 00:32:52.215 21:35:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:52.215 21:35:41 -- host/mdns_discovery.sh@139 -- # sleep 1 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:53.593 21:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:53.593 21:35:42 -- common/autotest_common.sh@10 -- # set +x 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@64 -- # sort 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@64 -- # xargs 00:32:53.593 21:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:53.593 21:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:53.593 21:35:42 -- common/autotest_common.sh@10 -- # set +x 00:32:53.593 21:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:53.593 21:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:53.593 21:35:42 -- common/autotest_common.sh@10 -- # set +x 00:32:53.593 [2024-04-26 21:35:42.557320] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:53.593 [2024-04-26 21:35:42.558397] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:53.593 [2024-04-26 21:35:42.558504] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:53.593 [2024-04-26 21:35:42.558580] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:32:53.593 [2024-04-26 21:35:42.558626] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:32:53.593 21:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:32:53.593 21:35:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:53.593 21:35:42 -- common/autotest_common.sh@10 -- # set +x 00:32:53.593 [2024-04-26 21:35:42.569242] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:32:53.593 [2024-04-26 21:35:42.569379] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:32:53.593 [2024-04-26 21:35:42.570367] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:53.593 21:35:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:53.593 21:35:42 -- host/mdns_discovery.sh@149 -- # sleep 1 00:32:53.593 [2024-04-26 21:35:42.702240] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:32:53.593 [2024-04-26 21:35:42.703214] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:32:53.593 [2024-04-26 21:35:42.762304] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:32:53.593 [2024-04-26 21:35:42.762342] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:53.593 [2024-04-26 21:35:42.762346] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:53.593 [2024-04-26 21:35:42.762376] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:53.593 [2024-04-26 21:35:42.762405] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 
done 00:32:53.593 [2024-04-26 21:35:42.762433] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:32:53.593 [2024-04-26 21:35:42.762438] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:32:53.593 [2024-04-26 21:35:42.762457] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:32:53.593 [2024-04-26 21:35:42.808107] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:53.593 [2024-04-26 21:35:42.808129] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:53.593 [2024-04-26 21:35:42.808156] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:32:53.593 [2024-04-26 21:35:42.808160] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@68 -- # sort 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:32:54.530 21:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:54.530 21:35:43 -- common/autotest_common.sh@10 -- # set +x 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@68 -- # xargs 00:32:54.530 21:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:32:54.530 21:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@64 -- # xargs 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@64 -- # sort 00:32:54.530 21:35:43 -- common/autotest_common.sh@10 -- # set +x 00:32:54.530 21:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@72 -- # sort -n 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@72 -- # xargs 00:32:54.530 21:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:54.530 21:35:43 -- common/autotest_common.sh@10 -- # set +x 00:32:54.530 21:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:54.530 21:35:43 -- 
host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:32:54.530 21:35:43 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:32:54.531 21:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:54.531 21:35:43 -- common/autotest_common.sh@10 -- # set +x 00:32:54.531 21:35:43 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:54.531 21:35:43 -- host/mdns_discovery.sh@72 -- # sort -n 00:32:54.531 21:35:43 -- host/mdns_discovery.sh@72 -- # xargs 00:32:54.531 21:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:54.792 21:35:43 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:54.792 21:35:43 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:32:54.792 21:35:43 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:32:54.792 21:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:54.792 21:35:43 -- common/autotest_common.sh@10 -- # set +x 00:32:54.792 21:35:43 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:32:54.792 21:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:54.792 21:35:43 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:32:54.792 21:35:43 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:32:54.792 21:35:43 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:32:54.792 21:35:43 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:54.792 21:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:54.792 21:35:43 -- common/autotest_common.sh@10 -- # set +x 00:32:54.792 [2024-04-26 21:35:43.875817] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:54.792 [2024-04-26 21:35:43.875848] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:54.792 [2024-04-26 21:35:43.875870] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:32:54.792 [2024-04-26 21:35:43.875878] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:32:54.792 [2024-04-26 21:35:43.876437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.792 [2024-04-26 21:35:43.876458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.792 [2024-04-26 21:35:43.876465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.792 [2024-04-26 21:35:43.876471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.792 [2024-04-26 21:35:43.876476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.792 [2024-04-26 21:35:43.876482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.792 [2024-04-26 21:35:43.876488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.792 [2024-04-26 21:35:43.876493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.792 [2024-04-26 21:35:43.876498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.792 21:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:54.792 21:35:43 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:32:54.792 21:35:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:54.792 21:35:43 -- common/autotest_common.sh@10 -- # set +x 00:32:54.792 [2024-04-26 21:35:43.886358] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 00:32:54.792 [2024-04-26 21:35:43.887795] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:54.792 [2024-04-26 21:35:43.887830] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:32:54.792 21:35:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:54.792 [2024-04-26 21:35:43.892460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.792 [2024-04-26 21:35:43.892480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.792 [2024-04-26 21:35:43.892487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.792 [2024-04-26 21:35:43.892492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.792 [2024-04-26 21:35:43.892498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.792 [2024-04-26 21:35:43.892504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.792 [2024-04-26 21:35:43.892510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.792 [2024-04-26 21:35:43.892515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.792 [2024-04-26 21:35:43.892521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcbab0 is same with the state(5) to be set 00:32:54.792 21:35:43 -- host/mdns_discovery.sh@162 -- # sleep 1 00:32:54.792 [2024-04-26 21:35:43.896363] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.792 [2024-04-26 21:35:43.896455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-04-26 21:35:43.896481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-04-26 21:35:43.896490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8cd0 with addr=10.0.0.2, port=4420 00:32:54.792 [2024-04-26 21:35:43.896497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.792 [2024-04-26 21:35:43.896507] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 
00:32:54.792 [2024-04-26 21:35:43.896516] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.792 [2024-04-26 21:35:43.896521] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:54.792 [2024-04-26 21:35:43.896528] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.792 [2024-04-26 21:35:43.896538] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.792 [2024-04-26 21:35:43.902418] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbab0 (9): Bad file descriptor 00:32:54.792 [2024-04-26 21:35:43.906382] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.792 [2024-04-26 21:35:43.906477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-04-26 21:35:43.906502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-04-26 21:35:43.906509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8cd0 with addr=10.0.0.2, port=4420 00:32:54.792 [2024-04-26 21:35:43.906516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.792 [2024-04-26 21:35:43.906525] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 00:32:54.792 [2024-04-26 21:35:43.906533] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.792 [2024-04-26 21:35:43.906538] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:54.792 [2024-04-26 21:35:43.906544] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.792 [2024-04-26 21:35:43.906553] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.792 [2024-04-26 21:35:43.912405] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:54.792 [2024-04-26 21:35:43.912453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-04-26 21:35:43.912475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-04-26 21:35:43.912482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcbab0 with addr=10.0.0.3, port=4420 00:32:54.792 [2024-04-26 21:35:43.912488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcbab0 is same with the state(5) to be set 00:32:54.792 [2024-04-26 21:35:43.912496] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbab0 (9): Bad file descriptor 00:32:54.792 [2024-04-26 21:35:43.912504] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:54.792 [2024-04-26 21:35:43.912509] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:54.792 [2024-04-26 21:35:43.912514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 
00:32:54.792 [2024-04-26 21:35:43.912522] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.792 [2024-04-26 21:35:43.916412] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.792 [2024-04-26 21:35:43.916456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-04-26 21:35:43.916478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-04-26 21:35:43.916485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8cd0 with addr=10.0.0.2, port=4420 00:32:54.792 [2024-04-26 21:35:43.916490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.792 [2024-04-26 21:35:43.916498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 00:32:54.792 [2024-04-26 21:35:43.916505] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.792 [2024-04-26 21:35:43.916510] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:54.792 [2024-04-26 21:35:43.916515] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.792 [2024-04-26 21:35:43.916523] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.792 [2024-04-26 21:35:43.922418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:54.792 [2024-04-26 21:35:43.922531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-04-26 21:35:43.922592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.792 [2024-04-26 21:35:43.922626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcbab0 with addr=10.0.0.3, port=4420 00:32:54.792 [2024-04-26 21:35:43.922672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcbab0 is same with the state(5) to be set 00:32:54.793 [2024-04-26 21:35:43.922716] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbab0 (9): Bad file descriptor 00:32:54.793 [2024-04-26 21:35:43.922796] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:54.793 [2024-04-26 21:35:43.922834] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:54.793 [2024-04-26 21:35:43.922883] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:32:54.793 [2024-04-26 21:35:43.922926] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
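The repeated "connect() failed, errno = 111" / "Resetting controller failed" messages above are expected here rather than a test failure: the 4420 listeners on both subsystems were just removed, so the host's existing controller paths to port 4420 get connection-refused on every reconnect attempt while the 4421 paths added earlier stay healthy. One way to confirm which paths survive, mirroring the test's get_subsystem_paths helper (same host socket as above; once the discovery log page has been re-read, only 4421 should be left):

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid'
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid'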
00:32:54.793 [2024-04-26 21:35:43.926425] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.793 [2024-04-26 21:35:43.926520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.926572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.926605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8cd0 with addr=10.0.0.2, port=4420 00:32:54.793 [2024-04-26 21:35:43.926644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.793 [2024-04-26 21:35:43.926699] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 00:32:54.793 [2024-04-26 21:35:43.926740] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.793 [2024-04-26 21:35:43.926778] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:54.793 [2024-04-26 21:35:43.926823] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.793 [2024-04-26 21:35:43.926853] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.793 [2024-04-26 21:35:43.932485] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:54.793 [2024-04-26 21:35:43.932593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.932650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.932682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcbab0 with addr=10.0.0.3, port=4420 00:32:54.793 [2024-04-26 21:35:43.932722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcbab0 is same with the state(5) to be set 00:32:54.793 [2024-04-26 21:35:43.932812] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbab0 (9): Bad file descriptor 00:32:54.793 [2024-04-26 21:35:43.932856] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:54.793 [2024-04-26 21:35:43.932896] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:54.793 [2024-04-26 21:35:43.932943] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:32:54.793 [2024-04-26 21:35:43.933036] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.793 [2024-04-26 21:35:43.936480] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.793 [2024-04-26 21:35:43.936537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.936561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.936569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8cd0 with addr=10.0.0.2, port=4420 00:32:54.793 [2024-04-26 21:35:43.936575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.793 [2024-04-26 21:35:43.936584] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 00:32:54.793 [2024-04-26 21:35:43.936592] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.793 [2024-04-26 21:35:43.936597] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:54.793 [2024-04-26 21:35:43.936602] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.793 [2024-04-26 21:35:43.936611] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.793 [2024-04-26 21:35:43.942536] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:54.793 [2024-04-26 21:35:43.942590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.942615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.942623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcbab0 with addr=10.0.0.3, port=4420 00:32:54.793 [2024-04-26 21:35:43.942629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcbab0 is same with the state(5) to be set 00:32:54.793 [2024-04-26 21:35:43.942640] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbab0 (9): Bad file descriptor 00:32:54.793 [2024-04-26 21:35:43.942649] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:54.793 [2024-04-26 21:35:43.942654] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:54.793 [2024-04-26 21:35:43.942660] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:32:54.793 [2024-04-26 21:35:43.942670] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.793 [2024-04-26 21:35:43.946500] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.793 [2024-04-26 21:35:43.946610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.946686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.946720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8cd0 with addr=10.0.0.2, port=4420 00:32:54.793 [2024-04-26 21:35:43.946758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.793 [2024-04-26 21:35:43.946806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 00:32:54.793 [2024-04-26 21:35:43.946853] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.793 [2024-04-26 21:35:43.946889] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:54.793 [2024-04-26 21:35:43.946925] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.793 [2024-04-26 21:35:43.946982] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.793 [2024-04-26 21:35:43.952551] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:54.793 [2024-04-26 21:35:43.952650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.952702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.952731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcbab0 with addr=10.0.0.3, port=4420 00:32:54.793 [2024-04-26 21:35:43.952766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcbab0 is same with the state(5) to be set 00:32:54.793 [2024-04-26 21:35:43.952849] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbab0 (9): Bad file descriptor 00:32:54.793 [2024-04-26 21:35:43.952903] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:54.793 [2024-04-26 21:35:43.952938] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:54.793 [2024-04-26 21:35:43.952971] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:32:54.793 [2024-04-26 21:35:43.953002] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.793 [2024-04-26 21:35:43.956563] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.793 [2024-04-26 21:35:43.956660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.956711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.956738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8cd0 with addr=10.0.0.2, port=4420 00:32:54.793 [2024-04-26 21:35:43.956773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.793 [2024-04-26 21:35:43.956854] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 00:32:54.793 [2024-04-26 21:35:43.956901] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.793 [2024-04-26 21:35:43.956934] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:54.793 [2024-04-26 21:35:43.956966] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.793 [2024-04-26 21:35:43.956976] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.793 [2024-04-26 21:35:43.962595] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:54.793 [2024-04-26 21:35:43.962665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.962692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.962701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcbab0 with addr=10.0.0.3, port=4420 00:32:54.793 [2024-04-26 21:35:43.962707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcbab0 is same with the state(5) to be set 00:32:54.793 [2024-04-26 21:35:43.962717] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbab0 (9): Bad file descriptor 00:32:54.793 [2024-04-26 21:35:43.962726] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:54.793 [2024-04-26 21:35:43.962731] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:54.793 [2024-04-26 21:35:43.962737] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:32:54.793 [2024-04-26 21:35:43.962746] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.793 [2024-04-26 21:35:43.966608] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.793 [2024-04-26 21:35:43.966658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.966682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.793 [2024-04-26 21:35:43.966690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8cd0 with addr=10.0.0.2, port=4420 00:32:54.793 [2024-04-26 21:35:43.966697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.793 [2024-04-26 21:35:43.966706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 00:32:54.793 [2024-04-26 21:35:43.966715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.793 [2024-04-26 21:35:43.966721] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:54.793 [2024-04-26 21:35:43.966726] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.793 [2024-04-26 21:35:43.966735] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.794 [2024-04-26 21:35:43.972607] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:54.794 [2024-04-26 21:35:43.972651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:43.972673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:43.972680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcbab0 with addr=10.0.0.3, port=4420 00:32:54.794 [2024-04-26 21:35:43.972685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcbab0 is same with the state(5) to be set 00:32:54.794 [2024-04-26 21:35:43.972693] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbab0 (9): Bad file descriptor 00:32:54.794 [2024-04-26 21:35:43.972700] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:54.794 [2024-04-26 21:35:43.972705] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:54.794 [2024-04-26 21:35:43.972710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:32:54.794 [2024-04-26 21:35:43.972717] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.794 [2024-04-26 21:35:43.976622] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.794 [2024-04-26 21:35:43.976675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:43.976699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:43.976707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8cd0 with addr=10.0.0.2, port=4420 00:32:54.794 [2024-04-26 21:35:43.976712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.794 [2024-04-26 21:35:43.976721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 00:32:54.794 [2024-04-26 21:35:43.976729] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.794 [2024-04-26 21:35:43.976734] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:54.794 [2024-04-26 21:35:43.976739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.794 [2024-04-26 21:35:43.976747] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.794 [2024-04-26 21:35:43.982622] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:54.794 [2024-04-26 21:35:43.982682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:43.982709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:43.982718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcbab0 with addr=10.0.0.3, port=4420 00:32:54.794 [2024-04-26 21:35:43.982724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcbab0 is same with the state(5) to be set 00:32:54.794 [2024-04-26 21:35:43.982734] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbab0 (9): Bad file descriptor 00:32:54.794 [2024-04-26 21:35:43.982743] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:54.794 [2024-04-26 21:35:43.982748] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:54.794 [2024-04-26 21:35:43.982754] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:32:54.794 [2024-04-26 21:35:43.982764] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.794 [2024-04-26 21:35:43.986641] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.794 [2024-04-26 21:35:43.986694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:43.986719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:43.986727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8cd0 with addr=10.0.0.2, port=4420 00:32:54.794 [2024-04-26 21:35:43.986733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.794 [2024-04-26 21:35:43.986743] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 00:32:54.794 [2024-04-26 21:35:43.986752] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.794 [2024-04-26 21:35:43.986757] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:54.794 [2024-04-26 21:35:43.986763] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.794 [2024-04-26 21:35:43.986773] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.794 [2024-04-26 21:35:43.992645] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:54.794 [2024-04-26 21:35:43.992764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:43.992828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:43.992866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcbab0 with addr=10.0.0.3, port=4420 00:32:54.794 [2024-04-26 21:35:43.992909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcbab0 is same with the state(5) to be set 00:32:54.794 [2024-04-26 21:35:43.992953] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbab0 (9): Bad file descriptor 00:32:54.794 [2024-04-26 21:35:43.993041] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:54.794 [2024-04-26 21:35:43.993084] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:54.794 [2024-04-26 21:35:43.993137] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:32:54.794 [2024-04-26 21:35:43.993170] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.794 [2024-04-26 21:35:43.996659] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.794 [2024-04-26 21:35:43.996772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:43.996832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:43.996868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8cd0 with addr=10.0.0.2, port=4420 00:32:54.794 [2024-04-26 21:35:43.996916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.794 [2024-04-26 21:35:43.996960] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 00:32:54.794 [2024-04-26 21:35:43.997063] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.794 [2024-04-26 21:35:43.997106] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:54.794 [2024-04-26 21:35:43.997162] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.794 [2024-04-26 21:35:43.997194] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.794 [2024-04-26 21:35:44.002704] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:54.794 [2024-04-26 21:35:44.002792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:44.002820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:44.002829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcbab0 with addr=10.0.0.3, port=4420 00:32:54.794 [2024-04-26 21:35:44.002835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcbab0 is same with the state(5) to be set 00:32:54.794 [2024-04-26 21:35:44.002845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbab0 (9): Bad file descriptor 00:32:54.794 [2024-04-26 21:35:44.002854] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:54.794 [2024-04-26 21:35:44.002860] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:54.794 [2024-04-26 21:35:44.002866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:32:54.794 [2024-04-26 21:35:44.002875] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.794 [2024-04-26 21:35:44.006715] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.794 [2024-04-26 21:35:44.006771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:44.006798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:44.006808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8cd0 with addr=10.0.0.2, port=4420 00:32:54.794 [2024-04-26 21:35:44.006814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.794 [2024-04-26 21:35:44.006824] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 00:32:54.794 [2024-04-26 21:35:44.006834] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.794 [2024-04-26 21:35:44.006840] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:54.794 [2024-04-26 21:35:44.006846] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.794 [2024-04-26 21:35:44.006856] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.794 [2024-04-26 21:35:44.012753] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:32:54.794 [2024-04-26 21:35:44.012817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:44.012841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:44.012848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbcbab0 with addr=10.0.0.3, port=4420 00:32:54.794 [2024-04-26 21:35:44.012854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbcbab0 is same with the state(5) to be set 00:32:54.794 [2024-04-26 21:35:44.012864] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbab0 (9): Bad file descriptor 00:32:54.794 [2024-04-26 21:35:44.012872] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:32:54.794 [2024-04-26 21:35:44.012877] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:32:54.794 [2024-04-26 21:35:44.012882] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:32:54.794 [2024-04-26 21:35:44.012891] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:54.794 [2024-04-26 21:35:44.016731] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:54.794 [2024-04-26 21:35:44.016795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:44.016820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:54.794 [2024-04-26 21:35:44.016829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb8cd0 with addr=10.0.0.2, port=4420 00:32:54.794 [2024-04-26 21:35:44.016835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb8cd0 is same with the state(5) to be set 00:32:54.795 [2024-04-26 21:35:44.016845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb8cd0 (9): Bad file descriptor 00:32:54.795 [2024-04-26 21:35:44.016853] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.795 [2024-04-26 21:35:44.016859] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:54.795 [2024-04-26 21:35:44.016865] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.795 [2024-04-26 21:35:44.016874] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:54.795 [2024-04-26 21:35:44.018969] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:54.795 [2024-04-26 21:35:44.018990] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:54.795 [2024-04-26 21:35:44.019016] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:54.795 [2024-04-26 21:35:44.019039] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:32:54.795 [2024-04-26 21:35:44.019050] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:32:54.795 [2024-04-26 21:35:44.019059] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:32:55.054 [2024-04-26 21:35:44.104888] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:55.054 [2024-04-26 21:35:44.104949] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:32:55.992 21:35:44 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:32:55.992 21:35:44 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:55.992 21:35:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:55.992 21:35:44 -- common/autotest_common.sh@10 -- # set +x 00:32:55.992 21:35:44 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:32:55.992 21:35:44 -- host/mdns_discovery.sh@68 -- # sort 00:32:55.992 21:35:44 -- host/mdns_discovery.sh@68 -- # xargs 00:32:55.992 21:35:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:55.992 21:35:44 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 
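The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: while the reset loop keeps retrying 10.0.0.2:4420 and 10.0.0.3:4420, the discovery log page no longer lists those listeners ("4420 not found") and reports the subsystems on port 4421 instead, at which point the controllers re-attach and the errors stop. As a hedged sketch only (these reconnect options are not taken from this log, and flag names can differ between SPDK releases), the length of such a retry storm could be bounded globally before any controller is attached:

  # illustrative: cap reconnect retries at roughly 10s with 1s between attempts
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options \
      --reconnect-delay-sec 1 \
      --ctrlr-loss-timeout-sec 10 \
      --fast-io-fail-timeout-sec 5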
00:32:55.992 21:35:44 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:32:55.992 21:35:44 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:55.992 21:35:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:55.992 21:35:44 -- common/autotest_common.sh@10 -- # set +x 00:32:55.992 21:35:44 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:32:55.992 21:35:44 -- host/mdns_discovery.sh@64 -- # sort 00:32:55.992 21:35:44 -- host/mdns_discovery.sh@64 -- # xargs 00:32:55.992 21:35:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:55.992 21:35:44 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:32:55.992 21:35:44 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@72 -- # sort -n 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@72 -- # xargs 00:32:55.992 21:35:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:55.992 21:35:45 -- common/autotest_common.sh@10 -- # set +x 00:32:55.992 21:35:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@72 -- # sort -n 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@72 -- # xargs 00:32:55.992 21:35:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:55.992 21:35:45 -- common/autotest_common.sh@10 -- # set +x 00:32:55.992 21:35:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:32:55.992 21:35:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:55.992 21:35:45 -- common/autotest_common.sh@10 -- # set +x 00:32:55.992 21:35:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:32:55.992 21:35:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:55.992 21:35:45 -- common/autotest_common.sh@10 -- # set +x 00:32:55.992 21:35:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:55.992 21:35:45 -- host/mdns_discovery.sh@172 -- # sleep 1 00:32:55.992 [2024-04-26 21:35:45.155961] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:32:56.928 21:35:46 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:32:56.928 21:35:46 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:32:56.928 21:35:46 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:32:56.928 21:35:46 -- host/mdns_discovery.sh@80 -- # sort 00:32:56.928 21:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:56.928 21:35:46 -- host/mdns_discovery.sh@80 -- # xargs 00:32:56.928 21:35:46 -- common/autotest_common.sh@10 -- # set +x 00:32:56.928 21:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:57.204 21:35:46 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:32:57.204 21:35:46 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:32:57.204 21:35:46 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.204 21:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:57.204 21:35:46 -- common/autotest_common.sh@10 -- # set +x 00:32:57.204 21:35:46 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:32:57.204 21:35:46 -- host/mdns_discovery.sh@68 -- # sort 00:32:57.204 21:35:46 -- host/mdns_discovery.sh@68 -- # xargs 00:32:57.204 21:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:57.204 21:35:46 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:32:57.204 21:35:46 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:32:57.204 21:35:46 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.204 21:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:57.204 21:35:46 -- common/autotest_common.sh@10 -- # set +x 00:32:57.205 21:35:46 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:32:57.205 21:35:46 -- host/mdns_discovery.sh@64 -- # xargs 00:32:57.205 21:35:46 -- host/mdns_discovery.sh@64 -- # sort 00:32:57.205 21:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:57.205 21:35:46 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:32:57.205 21:35:46 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:32:57.205 21:35:46 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:32:57.205 21:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:57.205 21:35:46 -- common/autotest_common.sh@10 -- # set +x 00:32:57.205 21:35:46 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:32:57.205 21:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:57.205 21:35:46 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:32:57.205 21:35:46 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:32:57.205 21:35:46 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:32:57.205 21:35:46 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:32:57.205 21:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:57.205 21:35:46 -- common/autotest_common.sh@10 -- # set +x 00:32:57.205 21:35:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:57.205 21:35:46 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:32:57.205 21:35:46 -- common/autotest_common.sh@638 -- # local es=0 00:32:57.205 21:35:46 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:32:57.205 21:35:46 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:32:57.205 21:35:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:57.205 21:35:46 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:32:57.205 21:35:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:57.205 21:35:46 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:32:57.205 21:35:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:57.205 21:35:46 -- common/autotest_common.sh@10 -- # set +x 00:32:57.205 [2024-04-26 21:35:46.395014] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:32:57.205 2024/04/26 21:35:46 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:32:57.205 request: 00:32:57.205 { 00:32:57.205 "method": "bdev_nvme_start_mdns_discovery", 00:32:57.205 "params": { 00:32:57.205 "name": "mdns", 00:32:57.205 "svcname": "_nvme-disc._http", 00:32:57.205 "hostnqn": "nqn.2021-12.io.spdk:test" 00:32:57.205 } 00:32:57.205 } 00:32:57.205 Got JSON-RPC error response 00:32:57.205 GoRPCClient: error on JSON-RPC call 00:32:57.205 21:35:46 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:32:57.205 21:35:46 -- common/autotest_common.sh@641 -- # es=1 00:32:57.205 21:35:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:57.205 21:35:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:57.205 21:35:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:57.205 21:35:46 -- host/mdns_discovery.sh@183 -- # sleep 5 00:32:57.785 [2024-04-26 21:35:46.779107] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:32:57.785 [2024-04-26 21:35:46.878910] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:32:57.785 [2024-04-26 21:35:46.978719] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:32:57.785 [2024-04-26 21:35:46.978739] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:32:57.785 TXT="p=tcp" 
"NQN=nqn.2014-08.org.nvmexpress.discovery" 00:32:57.785 cookie is 0 00:32:57.785 is_local: 1 00:32:57.785 our_own: 0 00:32:57.785 wide_area: 0 00:32:57.785 multicast: 1 00:32:57.785 cached: 1 00:32:58.043 [2024-04-26 21:35:47.078535] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:32:58.043 [2024-04-26 21:35:47.078564] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:32:58.043 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:32:58.043 cookie is 0 00:32:58.043 is_local: 1 00:32:58.043 our_own: 0 00:32:58.043 wide_area: 0 00:32:58.043 multicast: 1 00:32:58.043 cached: 1 00:32:58.981 [2024-04-26 21:35:47.980321] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:32:58.981 [2024-04-26 21:35:47.980360] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:32:58.981 [2024-04-26 21:35:47.980373] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:32:58.981 [2024-04-26 21:35:48.067268] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:32:58.981 [2024-04-26 21:35:48.080024] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:58.981 [2024-04-26 21:35:48.080049] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:58.981 [2024-04-26 21:35:48.080062] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:58.981 [2024-04-26 21:35:48.127723] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:32:58.981 [2024-04-26 21:35:48.127762] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:32:58.981 [2024-04-26 21:35:48.165850] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:32:58.981 [2024-04-26 21:35:48.224844] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:32:58.981 [2024-04-26 21:35:48.224889] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:33:02.271 21:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:02.271 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@80 -- # sort 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@80 -- # xargs 00:33:02.271 21:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@76 -- # sort 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:02.271 21:35:51 -- 
host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:33:02.271 21:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@76 -- # xargs 00:33:02.271 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:33:02.271 21:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@64 -- # sort 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@64 -- # xargs 00:33:02.271 21:35:51 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.271 21:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:02.271 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:33:02.542 21:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:33:02.542 21:35:51 -- common/autotest_common.sh@638 -- # local es=0 00:33:02.542 21:35:51 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:33:02.542 21:35:51 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:33:02.542 21:35:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:02.542 21:35:51 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:33:02.542 21:35:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:02.542 21:35:51 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:33:02.542 21:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:02.542 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:33:02.542 [2024-04-26 21:35:51.583517] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:33:02.542 2024/04/26 21:35:51 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:33:02.542 request: 00:33:02.542 { 00:33:02.542 "method": "bdev_nvme_start_mdns_discovery", 00:33:02.542 "params": { 00:33:02.542 "name": "cdc", 00:33:02.542 "svcname": "_nvme-disc._tcp", 00:33:02.542 "hostnqn": "nqn.2021-12.io.spdk:test" 00:33:02.542 } 00:33:02.542 } 00:33:02.542 Got JSON-RPC error response 00:33:02.542 GoRPCClient: error on JSON-RPC call 00:33:02.542 21:35:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:33:02.542 21:35:51 -- common/autotest_common.sh@641 -- # es=1 00:33:02.542 21:35:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:02.542 21:35:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:02.542 21:35:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@191 -- # 
get_discovery_ctrlrs 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:02.542 21:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:33:02.542 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@76 -- # sort 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@76 -- # xargs 00:33:02.542 21:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@64 -- # xargs 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@64 -- # sort 00:33:02.542 21:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:02.542 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:33:02.542 21:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:33:02.542 21:35:51 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:33:02.542 21:35:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:02.542 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:33:02.542 21:35:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:02.543 21:35:51 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:33:02.543 21:35:51 -- host/mdns_discovery.sh@197 -- # kill 105426 00:33:02.543 21:35:51 -- host/mdns_discovery.sh@200 -- # wait 105426 00:33:02.831 [2024-04-26 21:35:51.810194] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:33:02.831 21:35:51 -- host/mdns_discovery.sh@201 -- # kill 105507 00:33:02.831 Got SIGTERM, quitting. 00:33:02.831 21:35:51 -- host/mdns_discovery.sh@202 -- # kill 105455 00:33:02.831 21:35:51 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:33:02.831 21:35:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:33:02.831 21:35:51 -- nvmf/common.sh@117 -- # sync 00:33:02.831 Got SIGTERM, quitting. 00:33:02.831 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:33:02.831 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:33:02.831 avahi-daemon 0.8 exiting. 
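The two NOT rpc_cmd checks above are negative tests of bdev_nvme_start_mdns_discovery: with a service already browsing _nvme-disc._tcp under the name "mdns", registering the same name with a different svcname (_nvme-disc._http) and registering a new name ("cdc") for the same svcname are both rejected with JSON-RPC error Code=-17 Msg=File exists. A minimal manual reproduction, assuming a target that exposes its RPC socket at /tmp/host.sock as this test does, could look like:

  # first registration succeeds and starts the avahi browser for _nvme-disc._tcp
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # both of the following are then refused with -17 (File exists)
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
  rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test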
00:33:02.831 21:35:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:02.831 21:35:51 -- nvmf/common.sh@120 -- # set +e 00:33:02.831 21:35:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:02.831 21:35:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:02.831 rmmod nvme_tcp 00:33:02.831 rmmod nvme_fabrics 00:33:02.831 rmmod nvme_keyring 00:33:02.831 21:35:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:02.831 21:35:52 -- nvmf/common.sh@124 -- # set -e 00:33:02.831 21:35:52 -- nvmf/common.sh@125 -- # return 0 00:33:02.831 21:35:52 -- nvmf/common.sh@478 -- # '[' -n 105376 ']' 00:33:02.831 21:35:52 -- nvmf/common.sh@479 -- # killprocess 105376 00:33:02.831 21:35:52 -- common/autotest_common.sh@936 -- # '[' -z 105376 ']' 00:33:02.831 21:35:52 -- common/autotest_common.sh@940 -- # kill -0 105376 00:33:02.831 21:35:52 -- common/autotest_common.sh@941 -- # uname 00:33:02.831 21:35:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:02.831 21:35:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 105376 00:33:02.831 killing process with pid 105376 00:33:02.831 21:35:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:02.831 21:35:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:02.831 21:35:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 105376' 00:33:02.831 21:35:52 -- common/autotest_common.sh@955 -- # kill 105376 00:33:02.831 21:35:52 -- common/autotest_common.sh@960 -- # wait 105376 00:33:03.091 21:35:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:33:03.091 21:35:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:33:03.091 21:35:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:33:03.091 21:35:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:03.091 21:35:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:03.091 21:35:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.091 21:35:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:03.091 21:35:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.091 21:35:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:03.091 00:33:03.091 real 0m20.477s 00:33:03.091 user 0m39.769s 00:33:03.091 sys 0m2.054s 00:33:03.091 21:35:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:03.091 21:35:52 -- common/autotest_common.sh@10 -- # set +x 00:33:03.091 ************************************ 00:33:03.091 END TEST nvmf_mdns_discovery 00:33:03.091 ************************************ 00:33:03.351 21:35:52 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:33:03.351 21:35:52 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:33:03.351 21:35:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:33:03.351 21:35:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:03.351 21:35:52 -- common/autotest_common.sh@10 -- # set +x 00:33:03.351 ************************************ 00:33:03.351 START TEST nvmf_multipath 00:33:03.351 ************************************ 00:33:03.351 21:35:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:33:03.351 * Looking for test storage... 
00:33:03.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:03.611 21:35:52 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:03.611 21:35:52 -- nvmf/common.sh@7 -- # uname -s 00:33:03.611 21:35:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:03.611 21:35:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:03.611 21:35:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:03.611 21:35:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:03.611 21:35:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:03.611 21:35:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:03.611 21:35:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:03.611 21:35:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:03.611 21:35:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:03.611 21:35:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:03.611 21:35:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:33:03.611 21:35:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:33:03.611 21:35:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:03.611 21:35:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:03.611 21:35:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:03.611 21:35:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:03.611 21:35:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:03.611 21:35:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:03.611 21:35:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:03.611 21:35:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:03.611 21:35:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.611 21:35:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.612 21:35:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.612 21:35:52 -- paths/export.sh@5 -- # export PATH 00:33:03.612 21:35:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:03.612 21:35:52 -- nvmf/common.sh@47 -- # : 0 00:33:03.612 21:35:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:03.612 21:35:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:03.612 21:35:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:03.612 21:35:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:03.612 21:35:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:03.612 21:35:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:03.612 21:35:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:03.612 21:35:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:03.612 21:35:52 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:03.612 21:35:52 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:03.612 21:35:52 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:03.612 21:35:52 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:33:03.612 21:35:52 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:03.612 21:35:52 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:03.612 21:35:52 -- host/multipath.sh@30 -- # nvmftestinit 00:33:03.612 21:35:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:33:03.612 21:35:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:03.612 21:35:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:33:03.612 21:35:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:33:03.612 21:35:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:33:03.612 21:35:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:03.612 21:35:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:03.612 21:35:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:03.612 21:35:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:33:03.612 21:35:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:33:03.612 21:35:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:33:03.612 21:35:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:33:03.612 21:35:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:33:03.612 21:35:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:33:03.612 21:35:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:03.612 21:35:52 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:03.612 21:35:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:03.612 21:35:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:03.612 21:35:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:03.612 21:35:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:03.612 21:35:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:03.612 21:35:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:03.612 21:35:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:03.612 21:35:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:03.612 21:35:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:03.612 21:35:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:03.612 21:35:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:03.612 21:35:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:03.612 Cannot find device "nvmf_tgt_br" 00:33:03.612 21:35:52 -- nvmf/common.sh@155 -- # true 00:33:03.612 21:35:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:03.612 Cannot find device "nvmf_tgt_br2" 00:33:03.612 21:35:52 -- nvmf/common.sh@156 -- # true 00:33:03.612 21:35:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:33:03.612 21:35:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:03.612 Cannot find device "nvmf_tgt_br" 00:33:03.612 21:35:52 -- nvmf/common.sh@158 -- # true 00:33:03.612 21:35:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:03.612 Cannot find device "nvmf_tgt_br2" 00:33:03.612 21:35:52 -- nvmf/common.sh@159 -- # true 00:33:03.612 21:35:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:03.612 21:35:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:03.612 21:35:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:03.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:03.612 21:35:52 -- nvmf/common.sh@162 -- # true 00:33:03.612 21:35:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:03.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:03.612 21:35:52 -- nvmf/common.sh@163 -- # true 00:33:03.612 21:35:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:03.612 21:35:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:03.612 21:35:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:03.612 21:35:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:03.612 21:35:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:03.871 21:35:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:03.871 21:35:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:03.871 21:35:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:03.871 21:35:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:03.871 21:35:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:03.871 21:35:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:03.871 21:35:52 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:33:03.871 21:35:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:03.871 21:35:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:03.871 21:35:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:03.871 21:35:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:03.871 21:35:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:03.871 21:35:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:03.871 21:35:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:03.871 21:35:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:03.871 21:35:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:03.871 21:35:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:03.871 21:35:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:03.871 21:35:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:03.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:03.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:33:03.871 00:33:03.871 --- 10.0.0.2 ping statistics --- 00:33:03.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.871 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:33:03.871 21:35:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:03.871 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:03.871 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:33:03.871 00:33:03.871 --- 10.0.0.3 ping statistics --- 00:33:03.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.871 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:33:03.871 21:35:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:03.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:03.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:33:03.871 00:33:03.871 --- 10.0.0.1 ping statistics --- 00:33:03.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.871 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:33:03.871 21:35:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:03.871 21:35:52 -- nvmf/common.sh@422 -- # return 0 00:33:03.871 21:35:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:33:03.871 21:35:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:03.871 21:35:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:33:03.871 21:35:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:33:03.871 21:35:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:03.871 21:35:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:33:03.871 21:35:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:33:03.871 21:35:53 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:33:03.871 21:35:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:33:03.871 21:35:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:03.871 21:35:53 -- common/autotest_common.sh@10 -- # set +x 00:33:03.871 21:35:53 -- nvmf/common.sh@470 -- # nvmfpid=106028 00:33:03.871 21:35:53 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:03.871 21:35:53 -- nvmf/common.sh@471 -- # waitforlisten 106028 00:33:03.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.871 21:35:53 -- common/autotest_common.sh@817 -- # '[' -z 106028 ']' 00:33:03.871 21:35:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.871 21:35:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:03.871 21:35:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.871 21:35:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:03.871 21:35:53 -- common/autotest_common.sh@10 -- # set +x 00:33:03.871 [2024-04-26 21:35:53.079466] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:33:03.871 [2024-04-26 21:35:53.079534] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:04.138 [2024-04-26 21:35:53.218795] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:04.138 [2024-04-26 21:35:53.277241] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:04.138 [2024-04-26 21:35:53.277410] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:04.138 [2024-04-26 21:35:53.277472] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:04.138 [2024-04-26 21:35:53.277519] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:04.138 [2024-04-26 21:35:53.277538] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
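For readers skimming the trace, here is a condensed, hedged sketch of the virtual topology that the nvmf_veth_init commands above just built (device names, addresses and ports are the ones traced; the real helper in test/nvmf/common.sh also tears down any stale devices first, which is what the "Cannot find device" lines are). The initiator stays in the root namespace at 10.0.0.1, the target gets its own namespace with 10.0.0.2 and 10.0.0.3, and a bridge ties the host-side veth ends together:

# run as root; names and addresses as traced above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target pair 2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge joins the three host-side ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # initiator reaches both target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target reaches the initiator

All NVMe/TCP traffic in the rest of the run flows over this bridge; both test ports (4420 and 4421) listen on 10.0.0.2 inside the target namespace.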
00:33:04.138 [2024-04-26 21:35:53.278603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.138 [2024-04-26 21:35:53.278604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.076 21:35:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:05.076 21:35:53 -- common/autotest_common.sh@850 -- # return 0 00:33:05.076 21:35:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:33:05.076 21:35:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:05.076 21:35:53 -- common/autotest_common.sh@10 -- # set +x 00:33:05.076 21:35:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.076 21:35:54 -- host/multipath.sh@33 -- # nvmfapp_pid=106028 00:33:05.076 21:35:54 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:05.076 [2024-04-26 21:35:54.219756] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.076 21:35:54 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:05.336 Malloc0 00:33:05.336 21:35:54 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:05.595 21:35:54 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:05.854 21:35:54 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:06.113 [2024-04-26 21:35:55.138608] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:06.113 21:35:55 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:06.113 [2024-04-26 21:35:55.358263] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:06.372 21:35:55 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:06.372 21:35:55 -- host/multipath.sh@44 -- # bdevperf_pid=106126 00:33:06.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:06.372 21:35:55 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:06.372 21:35:55 -- host/multipath.sh@47 -- # waitforlisten 106126 /var/tmp/bdevperf.sock 00:33:06.372 21:35:55 -- common/autotest_common.sh@817 -- # '[' -z 106126 ']' 00:33:06.372 21:35:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:06.372 21:35:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:06.372 21:35:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
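The RPC calls traced above configure the target end to end. As a hedged recap (paths abbreviated, flags copied verbatim from the trace; scripts/rpc.py --help is the authoritative reference for what each flag means), the sequence is roughly:

# nvmf_tgt itself was started inside the target namespace as:
#   ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options from NVMF_TRANSPORT_OPTS
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2   # -a allow any host, -r ANA reporting
$rpc nvmf_subsystem_add_ns "$NQN" Malloc0                     # attach the malloc bdev as a namespace
$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # path 1
$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421   # path 2

Two listeners on the same address are what give the initiator its two paths to one subsystem.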
00:33:06.372 21:35:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:06.372 21:35:55 -- common/autotest_common.sh@10 -- # set +x 00:33:07.308 21:35:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:07.308 21:35:56 -- common/autotest_common.sh@850 -- # return 0 00:33:07.308 21:35:56 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:07.566 21:35:56 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:33:07.825 Nvme0n1 00:33:07.825 21:35:56 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:08.084 Nvme0n1 00:33:08.084 21:35:57 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:08.084 21:35:57 -- host/multipath.sh@78 -- # sleep 1 00:33:09.019 21:35:58 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:33:09.019 21:35:58 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:09.278 21:35:58 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:09.537 21:35:58 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:33:09.537 21:35:58 -- host/multipath.sh@65 -- # dtrace_pid=106219 00:33:09.537 21:35:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:09.537 21:35:58 -- host/multipath.sh@66 -- # sleep 6 00:33:16.102 21:36:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:16.102 21:36:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:33:16.102 21:36:04 -- host/multipath.sh@67 -- # active_port=4421 00:33:16.102 21:36:04 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:16.102 Attaching 4 probes... 
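On the initiator side, the commands traced above boil down to the following sketch: bdevperf is started in wait-for-RPC mode (-z) on its own socket, both target ports are attached to the same controller, and only the second attach passes -x multipath, which is what turns the two TCP connections into two paths of the one Nvme0n1 bdev. perform_tests then launches the 90-second verify workload that the rest of the log measures.

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

$bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 90 &    # queue depth 128, 4 KiB I/O, verify, 90 s
$rpc -s "$sock" bdev_nvme_set_options -r -1                        # option copied verbatim from the trace
$rpc -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -l -1 -o 10
$rpc -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x multipath -l -1 -o 10
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s "$sock" perform_tests &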
00:33:16.102 @path[10.0.0.2, 4421]: 18857 00:33:16.102 @path[10.0.0.2, 4421]: 19586 00:33:16.102 @path[10.0.0.2, 4421]: 18727 00:33:16.102 @path[10.0.0.2, 4421]: 18055 00:33:16.102 @path[10.0.0.2, 4421]: 18093 00:33:16.102 21:36:04 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:16.102 21:36:04 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:16.102 21:36:04 -- host/multipath.sh@69 -- # sed -n 1p 00:33:16.102 21:36:04 -- host/multipath.sh@69 -- # port=4421 00:33:16.102 21:36:04 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:33:16.102 21:36:04 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:33:16.102 21:36:04 -- host/multipath.sh@72 -- # kill 106219 00:33:16.102 21:36:04 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:16.102 21:36:04 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:33:16.102 21:36:04 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:16.102 21:36:05 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:16.361 21:36:05 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:33:16.361 21:36:05 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:16.361 21:36:05 -- host/multipath.sh@65 -- # dtrace_pid=106344 00:33:16.361 21:36:05 -- host/multipath.sh@66 -- # sleep 6 00:33:22.930 21:36:11 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:22.930 21:36:11 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:33:22.930 21:36:11 -- host/multipath.sh@67 -- # active_port=4420 00:33:22.930 21:36:11 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:22.930 Attaching 4 probes... 
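Every confirm_io_on_port cycle above repeats the same recipe. A hedged sketch of the moving parts follows; the bpftrace program scripts/bpf/nvmf_path.bt attaches to the running nvmf_tgt and emits one "@path[<addr>, <port>]: <count>" line per path that actually served I/O, collected in test/nvmf/host/trace.txt (the exact way bpftrace.sh backgrounds itself and the order of the parsing pipeline may differ slightly from this sketch):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

# 1. flip the ANA state of each listener (first cycle: 4420 non_optimized, 4421 optimized)
$rpc nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
$rpc nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n optimized

# 2. sample the target for 6 seconds (106028 is the nvmf_tgt pid in this run; bpftrace pid 106219 above)
/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt &
dtrace_pid=$!
sleep 6

# 3. which port does the target itself report as being in the expected ANA state?
active_port=$($rpc nvmf_subsystem_get_listeners "$NQN" |
    jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')

# 4. which port do the bpftrace counters say carried the I/O?
port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

[[ $port == "$active_port" ]]        # pass: target state and observed traffic agree
kill "$dtrace_pid"; rm -f "$trace"   # stop sampling and reset for the next cycle

In the real helper the observed port is checked both against the port passed to confirm_io_on_port and against the port the target reports, which is why each cycle in the trace shows two [[ ... ]] comparisons.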
00:33:22.930 @path[10.0.0.2, 4420]: 17936 00:33:22.930 @path[10.0.0.2, 4420]: 18092 00:33:22.930 @path[10.0.0.2, 4420]: 19244 00:33:22.930 @path[10.0.0.2, 4420]: 19579 00:33:22.930 @path[10.0.0.2, 4420]: 19453 00:33:22.930 21:36:11 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:22.930 21:36:11 -- host/multipath.sh@69 -- # sed -n 1p 00:33:22.930 21:36:11 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:22.930 21:36:11 -- host/multipath.sh@69 -- # port=4420 00:33:22.930 21:36:11 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:33:22.930 21:36:11 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:33:22.930 21:36:11 -- host/multipath.sh@72 -- # kill 106344 00:33:22.931 21:36:11 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:22.931 21:36:11 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:33:22.931 21:36:11 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:22.931 21:36:11 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:22.931 21:36:12 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:33:22.931 21:36:12 -- host/multipath.sh@65 -- # dtrace_pid=106481 00:33:22.931 21:36:12 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:22.931 21:36:12 -- host/multipath.sh@66 -- # sleep 6 00:33:29.497 21:36:18 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:29.497 21:36:18 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:33:29.497 21:36:18 -- host/multipath.sh@67 -- # active_port=4421 00:33:29.497 21:36:18 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:29.497 Attaching 4 probes... 
00:33:29.497 @path[10.0.0.2, 4421]: 14808 00:33:29.497 @path[10.0.0.2, 4421]: 17351 00:33:29.497 @path[10.0.0.2, 4421]: 17046 00:33:29.497 @path[10.0.0.2, 4421]: 17774 00:33:29.497 @path[10.0.0.2, 4421]: 18176 00:33:29.497 21:36:18 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:29.497 21:36:18 -- host/multipath.sh@69 -- # sed -n 1p 00:33:29.497 21:36:18 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:29.497 21:36:18 -- host/multipath.sh@69 -- # port=4421 00:33:29.497 21:36:18 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:33:29.497 21:36:18 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:33:29.497 21:36:18 -- host/multipath.sh@72 -- # kill 106481 00:33:29.497 21:36:18 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:29.497 21:36:18 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:33:29.497 21:36:18 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:29.497 21:36:18 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:29.757 21:36:18 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:33:29.757 21:36:18 -- host/multipath.sh@65 -- # dtrace_pid=106607 00:33:29.757 21:36:18 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:29.757 21:36:18 -- host/multipath.sh@66 -- # sleep 6 00:33:36.325 21:36:24 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:36.325 21:36:24 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:33:36.325 21:36:25 -- host/multipath.sh@67 -- # active_port= 00:33:36.325 21:36:25 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:36.325 Attaching 4 probes... 
00:33:36.325 00:33:36.325 00:33:36.325 00:33:36.325 00:33:36.325 00:33:36.325 21:36:25 -- host/multipath.sh@69 -- # sed -n 1p 00:33:36.325 21:36:25 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:36.325 21:36:25 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:36.325 21:36:25 -- host/multipath.sh@69 -- # port= 00:33:36.325 21:36:25 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:33:36.325 21:36:25 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:33:36.325 21:36:25 -- host/multipath.sh@72 -- # kill 106607 00:33:36.325 21:36:25 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:36.325 21:36:25 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:33:36.325 21:36:25 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:36.325 21:36:25 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:36.325 21:36:25 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:33:36.325 21:36:25 -- host/multipath.sh@65 -- # dtrace_pid=106739 00:33:36.325 21:36:25 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:36.325 21:36:25 -- host/multipath.sh@66 -- # sleep 6 00:33:42.878 21:36:31 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:42.878 21:36:31 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:33:42.878 21:36:31 -- host/multipath.sh@67 -- # active_port=4421 00:33:42.879 21:36:31 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:42.879 Attaching 4 probes... 
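The inaccessible/inaccessible pass traced a little above is the degenerate case: with no usable path, bpftrace records no "@path" lines at all and the jq filter is handed an empty expected state, so the check reduces to comparing two empty strings, roughly:

# confirm_io_on_port '' ''  -> both sides must come back empty
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
    jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid'    # matches nothing
# trace.txt holds only the 'Attaching 4 probes...' header, so the awk/cut/sed pipeline prints nothing either
[[ '' == '' ]]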
00:33:42.879 @path[10.0.0.2, 4421]: 18985 00:33:42.879 @path[10.0.0.2, 4421]: 19678 00:33:42.879 @path[10.0.0.2, 4421]: 19382 00:33:42.879 @path[10.0.0.2, 4421]: 19758 00:33:42.879 @path[10.0.0.2, 4421]: 20235 00:33:42.879 21:36:31 -- host/multipath.sh@69 -- # sed -n 1p 00:33:42.879 21:36:31 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:42.879 21:36:31 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:42.879 21:36:31 -- host/multipath.sh@69 -- # port=4421 00:33:42.879 21:36:31 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:33:42.879 21:36:31 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:33:42.879 21:36:31 -- host/multipath.sh@72 -- # kill 106739 00:33:42.879 21:36:31 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:42.879 21:36:31 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:42.879 [2024-04-26 21:36:31.996440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996506] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996515] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996522] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996528] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996535] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996541] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996548] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996554] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996566] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996572] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996579] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996585] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [2024-04-26 21:36:31.996597] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.879 [the same tcp.c:1587:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats roughly a hundred consecutive times, timestamps 21:36:31.996603 through 21:36:31.997369, while this qpair is shut down; the duplicate entries are collapsed here] 00:33:42.880 [2024-04-26 21:36:31.997374] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same
with the state(5) to be set 00:33:42.880 [2024-04-26 21:36:31.997380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.880 [2024-04-26 21:36:31.997385] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cdfb0 is same with the state(5) to be set 00:33:42.880 21:36:32 -- host/multipath.sh@101 -- # sleep 1 00:33:43.810 21:36:33 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:33:43.810 21:36:33 -- host/multipath.sh@65 -- # dtrace_pid=106868 00:33:43.810 21:36:33 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:43.810 21:36:33 -- host/multipath.sh@66 -- # sleep 6 00:33:50.378 21:36:39 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:33:50.378 21:36:39 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:33:50.378 21:36:39 -- host/multipath.sh@67 -- # active_port=4420 00:33:50.378 21:36:39 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:50.378 Attaching 4 probes... 00:33:50.378 @path[10.0.0.2, 4420]: 19184 00:33:50.378 @path[10.0.0.2, 4420]: 18469 00:33:50.378 @path[10.0.0.2, 4420]: 18924 00:33:50.378 @path[10.0.0.2, 4420]: 18526 00:33:50.379 @path[10.0.0.2, 4420]: 19355 00:33:50.379 21:36:39 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:33:50.379 21:36:39 -- host/multipath.sh@69 -- # sed -n 1p 00:33:50.379 21:36:39 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:33:50.379 21:36:39 -- host/multipath.sh@69 -- # port=4420 00:33:50.379 21:36:39 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:33:50.379 21:36:39 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:33:50.379 21:36:39 -- host/multipath.sh@72 -- # kill 106868 00:33:50.379 21:36:39 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:33:50.379 21:36:39 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:50.379 [2024-04-26 21:36:39.559112] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:50.379 21:36:39 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:50.637 21:36:39 -- host/multipath.sh@111 -- # sleep 6 00:33:57.204 21:36:45 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:33:57.204 21:36:45 -- host/multipath.sh@65 -- # dtrace_pid=107066 00:33:57.204 21:36:45 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 106028 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:57.204 21:36:45 -- host/multipath.sh@66 -- # sleep 6 00:34:03.780 21:36:51 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:34:03.780 21:36:51 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:34:03.780 21:36:52 -- host/multipath.sh@67 -- # active_port=4421 00:34:03.780 21:36:52 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:03.780 Attaching 4 probes... 
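This last stretch of the trace exercises live path failure and recovery: the optimized 4421 listener is removed while bdevperf is still driving I/O to it (the burst of tcp.c:1587 recv-state errors above is presumably that qpair being torn down), I/O is then confirmed to continue on the surviving non_optimized 4420 path, and finally 4421 is added back and promoted to optimized again. A hedged outline of that sequence, commands as traced:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421   # drop the active path
sleep 1
# confirm_io_on_port non_optimized 4420  -> traffic must have failed over to the surviving port

$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421      # bring the path back
$rpc nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n optimized
sleep 6
# confirm_io_on_port optimized 4421      -> traffic moves back to the re-added listener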
00:34:03.780 @path[10.0.0.2, 4421]: 18267 00:34:03.780 @path[10.0.0.2, 4421]: 18392 00:34:03.780 @path[10.0.0.2, 4421]: 17954 00:34:03.780 @path[10.0.0.2, 4421]: 18328 00:34:03.780 @path[10.0.0.2, 4421]: 18512 00:34:03.780 21:36:52 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:34:03.780 21:36:52 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:34:03.780 21:36:52 -- host/multipath.sh@69 -- # sed -n 1p 00:34:03.780 21:36:52 -- host/multipath.sh@69 -- # port=4421 00:34:03.780 21:36:52 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:34:03.780 21:36:52 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:34:03.780 21:36:52 -- host/multipath.sh@72 -- # kill 107066 00:34:03.780 21:36:52 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:03.780 21:36:52 -- host/multipath.sh@114 -- # killprocess 106126 00:34:03.780 21:36:52 -- common/autotest_common.sh@936 -- # '[' -z 106126 ']' 00:34:03.780 21:36:52 -- common/autotest_common.sh@940 -- # kill -0 106126 00:34:03.780 21:36:52 -- common/autotest_common.sh@941 -- # uname 00:34:03.780 21:36:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:03.780 21:36:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 106126 00:34:03.780 21:36:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:34:03.780 21:36:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:34:03.780 21:36:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 106126' 00:34:03.780 killing process with pid 106126 00:34:03.780 21:36:52 -- common/autotest_common.sh@955 -- # kill 106126 00:34:03.780 21:36:52 -- common/autotest_common.sh@960 -- # wait 106126 00:34:03.780 Connection closed with partial response: 00:34:03.780 00:34:03.780 00:34:03.780 21:36:52 -- host/multipath.sh@116 -- # wait 106126 00:34:03.780 21:36:52 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:03.780 [2024-04-26 21:35:55.434428] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:34:03.780 [2024-04-26 21:35:55.434529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106126 ] 00:34:03.780 [2024-04-26 21:35:55.559920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.780 [2024-04-26 21:35:55.633785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:03.780 Running I/O for 90 seconds... 
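Once the final cycle passes, the teardown traced above is the stock killprocess pattern from autotest_common.sh, after which the script replays bdevperf's own log (try.txt); that replay is what the remainder of this section is. A condensed sketch of the shutdown, using the pid recorded earlier in the trace:

pid=106126                                   # bdevperf, started with -z above
kill -0 "$pid"                               # still alive?
ps --no-headers -o comm= "$pid"              # -> reactor_2, i.e. an SPDK reactor thread, not a sudo wrapper
echo "killing process with pid $pid"
kill "$pid"
wait "$pid"
cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt   # bdevperf output, reproduced below

The "Connection closed with partial response" lines are presumably just the perform_tests RPC client losing its socket when bdevperf exits, not a test failure.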
00:34:03.780 [2024-04-26 21:36:05.378487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.780 [2024-04-26 21:36:05.378560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.378982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.378992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379129] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.781 [2024-04-26 21:36:05.379677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:03.781 [2024-04-26 21:36:05.379693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:50 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.379705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.379721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.379732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.379748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.379759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.379775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.379786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.379802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.379813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.379829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.379838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.379854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.379864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.379880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.379889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.379905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.379919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.379935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.379945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.379960] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.379970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.379986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.379996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.380012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.380022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.380038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.380048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.380064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.380074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.380091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.380100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.381868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.782 [2024-04-26 21:36:05.381895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.381918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.381929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.381947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.381958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.381974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.381985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:34:03.782 [2024-04-26 21:36:05.382002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:03.782 [2024-04-26 21:36:05.382523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.782 [2024-04-26 21:36:05.382533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:03.783 [2024-04-26 21:36:05.382837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:69736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.382988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.783 [2024-04-26 21:36:05.382999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:05.383546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:05.383578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.873510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.873601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.873668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.873681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.873698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 
lba:34544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.873709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.873726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:34552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.873736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.873753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.873772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.873789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:34568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.873800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.873817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.873827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.873844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.873854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.873871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.873881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.873898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.873909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.873926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.873936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.873952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.873962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.873979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.873997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.874013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.874024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.874040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.874051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.874067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.874078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.874095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.874105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.874122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.874137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.874154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.874165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.874181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.874192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:03.783 [2024-04-26 21:36:11.874209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.783 [2024-04-26 21:36:11.874219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:34696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 
m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:34760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874942] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:34864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.874972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.874990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:34872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:03.784 [2024-04-26 21:36:11.875485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.784 [2024-04-26 21:36:11.875495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:126 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.875973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.875991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.785 [2024-04-26 21:36:11.876000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.876488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.785 [2024-04-26 21:36:11.876502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.876522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.785 [2024-04-26 21:36:11.876532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.876550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.785 [2024-04-26 21:36:11.876561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:34:03.785 [2024-04-26 21:36:11.876580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.785 [2024-04-26 21:36:11.876591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.876610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.785 [2024-04-26 21:36:11.876620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.876639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.785 [2024-04-26 21:36:11.876649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.876668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.785 [2024-04-26 21:36:11.876678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.876818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.785 [2024-04-26 21:36:11.876833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.876864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.785 [2024-04-26 21:36:11.876874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.876896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.785 [2024-04-26 21:36:11.876906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.876929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.785 [2024-04-26 21:36:11.876939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.876961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.785 [2024-04-26 21:36:11.876971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.876993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.785 [2024-04-26 21:36:11.877003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:03.785 [2024-04-26 21:36:11.877025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:11.877035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:11.877067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:11.877099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:11.877130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:11.877162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:11.877194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:11.877226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:11.877264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:11.877296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:11.877338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:11.877371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.786 [2024-04-26 21:36:11.877403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.786 [2024-04-26 21:36:11.877434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.786 [2024-04-26 21:36:11.877465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:35176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.786 [2024-04-26 21:36:11.877497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.786 [2024-04-26 21:36:11.877529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.786 [2024-04-26 21:36:11.877560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.786 [2024-04-26 21:36:11.877591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.786 [2024-04-26 21:36:11.877623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:03.786 [2024-04-26 21:36:11.877659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:11.877681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.786 [2024-04-26 21:36:11.877691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.813174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.813241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.813288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.813301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.813316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.813327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.813353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.813363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.813379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.813389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.813404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.813414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.813430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.813440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.813456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.813467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.813987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.814014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.814036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.814048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.814066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.814101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.814120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.814129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.814146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.814155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.814172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.814183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.814199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.814211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.814228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.814238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.814289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.814301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.814318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.786 [2024-04-26 21:36:18.814343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:03.786 [2024-04-26 21:36:18.814361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:65544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:65584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:34:03.787 [2024-04-26 21:36:18.814646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:65592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:65624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.814974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.814990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.787 [2024-04-26 21:36:18.814999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.787 [2024-04-26 21:36:18.815025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.787 [2024-04-26 21:36:18.815049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.787 [2024-04-26 21:36:18.815074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.787 [2024-04-26 21:36:18.815098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.787 [2024-04-26 21:36:18.815124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.787 [2024-04-26 21:36:18.815148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.787 [2024-04-26 21:36:18.815177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.787 [2024-04-26 21:36:18.815201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.815225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.815254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.815278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.815302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.815327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.815362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.815386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.787 [2024-04-26 21:36:18.815411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:03.787 [2024-04-26 21:36:18.815425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:03.788 [2024-04-26 21:36:18.815435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 
lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.815990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.815999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:34:03.788 [2024-04-26 21:36:18.816401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:03.788 [2024-04-26 21:36:18.816655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.788 [2024-04-26 21:36:18.816669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.816686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.816696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.816720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.816729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.816746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.816756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.816774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.816783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.816801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.816810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.816829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.816838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.816856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.816867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.816884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.816894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.816912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.816923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.816940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.816951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.816968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.816978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.817022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.817051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.817080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.817109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.817136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.817164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.817194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.817222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:03.789 [2024-04-26 21:36:18.817249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.817275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.817302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.789 [2024-04-26 21:36:18.817338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.789 [2024-04-26 21:36:18.817367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.789 [2024-04-26 21:36:18.817400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.789 [2024-04-26 21:36:18.817427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.789 [2024-04-26 21:36:18.817455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.789 [2024-04-26 21:36:18.817482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.789 [2024-04-26 21:36:18.817508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:03.789 [2024-04-26 21:36:18.817526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:03.789 [2024-04-26 21:36:18.817536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:34:03.789-00:34:03.793 [2024-04-26 21:36:31.998141 through 21:36:32.001591] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs for every request still queued on qid:1 - READ sqid:1 nsid:1 lba:26600 through lba:27232 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE sqid:1 nsid:1 lba:27248 through lba:27616 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:03.793 [2024-04-26 21:36:32.001631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:03.793 [2024-04-26 21:36:32.001644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:03.793 [2024-04-26 21:36:32.001655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27240 len:8 PRP1 0x0 PRP2 0x0
00:34:03.793 [2024-04-26 21:36:32.001669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:03.793 [2024-04-26 21:36:32.001727] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12e8a00 was disconnected and freed. reset controller.
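When triaging a burst of abort notices like the block above, it is usually enough to tally the repeated completions rather than read each pair. A minimal sketch, assuming the console output has been saved to a file; the name console.log is only a placeholder, not something the harness produces:

  # Count the queued commands that were failed with "SQ DELETION" during the reset,
  # then split the aborted submissions into READs and WRITEs.
  grep -c 'ABORTED - SQ DELETION' console.log
  grep -o '\*NOTICE\*: READ sqid:1'  console.log | wc -l
  grep -o '\*NOTICE\*: WRITE sqid:1' console.log | wc -l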
00:34:03.793 [2024-04-26 21:36:32.003078] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.793 [2024-04-26 21:36:32.003167] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1486e80 (9): Bad file descriptor 00:34:03.793 [2024-04-26 21:36:32.003275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.793 [2024-04-26 21:36:32.003318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.793 [2024-04-26 21:36:32.003353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1486e80 with addr=10.0.0.2, port=4421 00:34:03.793 [2024-04-26 21:36:32.003368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1486e80 is same with the state(5) to be set 00:34:03.793 [2024-04-26 21:36:32.003395] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1486e80 (9): Bad file descriptor 00:34:03.793 [2024-04-26 21:36:32.003415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.793 [2024-04-26 21:36:32.003442] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.793 [2024-04-26 21:36:32.003457] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.793 [2024-04-26 21:36:32.003480] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.793 [2024-04-26 21:36:32.003493] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.793 [2024-04-26 21:36:42.066164] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:03.793 Received shutdown signal, test time was about 54.868807 seconds 00:34:03.793 00:34:03.793 Latency(us) 00:34:03.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:03.793 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:03.793 Verification LBA range: start 0x0 length 0x4000 00:34:03.793 Nvme0n1 : 54.87 7948.07 31.05 0.00 0.00 16081.81 515.13 7033243.39 00:34:03.793 =================================================================================================================== 00:34:03.793 Total : 7948.07 31.05 0.00 0.00 16081.81 515.13 7033243.39 00:34:03.793 21:36:52 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:03.793 21:36:52 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:34:03.793 21:36:52 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:03.793 21:36:52 -- host/multipath.sh@125 -- # nvmftestfini 00:34:03.793 21:36:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:34:03.793 21:36:52 -- nvmf/common.sh@117 -- # sync 00:34:03.793 21:36:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:03.793 21:36:52 -- nvmf/common.sh@120 -- # set +e 00:34:03.793 21:36:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:03.793 21:36:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:03.793 rmmod nvme_tcp 00:34:03.793 rmmod nvme_fabrics 00:34:03.793 rmmod nvme_keyring 00:34:03.793 21:36:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:03.793 21:36:52 -- nvmf/common.sh@124 -- # set -e 00:34:03.793 21:36:52 -- nvmf/common.sh@125 -- # return 0 00:34:03.793 21:36:52 -- nvmf/common.sh@478 -- # '[' -n 106028 ']' 00:34:03.793 21:36:52 -- nvmf/common.sh@479 -- # killprocess 106028 00:34:03.793 21:36:52 -- common/autotest_common.sh@936 -- # '[' -z 106028 ']' 00:34:03.793 21:36:52 -- common/autotest_common.sh@940 -- # kill -0 106028 00:34:03.793 21:36:52 -- common/autotest_common.sh@941 -- # uname 00:34:03.793 21:36:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:03.793 21:36:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 106028 00:34:03.793 21:36:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:03.793 21:36:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:03.793 21:36:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 106028' 00:34:03.793 killing process with pid 106028 00:34:03.793 21:36:52 -- common/autotest_common.sh@955 -- # kill 106028 00:34:03.793 21:36:52 -- common/autotest_common.sh@960 -- # wait 106028 00:34:03.793 21:36:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:34:03.793 21:36:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:34:03.793 21:36:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:34:03.793 21:36:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:03.793 21:36:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:03.793 21:36:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.793 21:36:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:03.793 21:36:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:03.793 21:36:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:34:03.793 00:34:03.793 real 1m0.492s 00:34:03.793 user 2m54.352s 00:34:03.793 sys 0m10.887s 00:34:03.793 21:36:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:03.793 
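A quick sanity check of the bdevperf summary above: with the 4096-byte I/O size this job uses, the MiB/s column should follow directly from the IOPS column, and it does. A one-line check with the numbers copied from the Nvme0n1 row:

  # 7948.07 IOPS * 4096 bytes per I/O is about 32.6 MB/s, i.e. 31.05 MiB/s, matching the table;
  # over the 54.87 s runtime that is roughly 4.4e5 verified I/Os.
  awk 'BEGIN { printf "%.2f MiB/s\n", 7948.07 * 4096 / (1024 * 1024) }'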
21:36:52 -- common/autotest_common.sh@10 -- # set +x 00:34:03.793 ************************************ 00:34:03.793 END TEST nvmf_multipath 00:34:03.793 ************************************ 00:34:03.793 21:36:53 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:34:03.793 21:36:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:34:03.793 21:36:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:03.793 21:36:53 -- common/autotest_common.sh@10 -- # set +x 00:34:04.053 ************************************ 00:34:04.053 START TEST nvmf_timeout 00:34:04.053 ************************************ 00:34:04.053 21:36:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:34:04.053 * Looking for test storage... 00:34:04.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:34:04.053 21:36:53 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:04.053 21:36:53 -- nvmf/common.sh@7 -- # uname -s 00:34:04.053 21:36:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.053 21:36:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.053 21:36:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.053 21:36:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.053 21:36:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.053 21:36:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.053 21:36:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.053 21:36:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.053 21:36:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.053 21:36:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.053 21:36:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:34:04.053 21:36:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:34:04.053 21:36:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.053 21:36:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.053 21:36:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:04.053 21:36:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.053 21:36:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:04.053 21:36:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.053 21:36:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.053 21:36:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.053 21:36:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.053 21:36:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.053 21:36:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.053 21:36:53 -- paths/export.sh@5 -- # export PATH 00:34:04.053 21:36:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.053 21:36:53 -- nvmf/common.sh@47 -- # : 0 00:34:04.053 21:36:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:04.053 21:36:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:04.053 21:36:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.053 21:36:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.053 21:36:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.053 21:36:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:04.053 21:36:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:04.053 21:36:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:04.053 21:36:53 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:04.053 21:36:53 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:04.053 21:36:53 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:04.053 21:36:53 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:34:04.053 21:36:53 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:04.053 21:36:53 -- host/timeout.sh@19 -- # nvmftestinit 00:34:04.053 21:36:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:34:04.053 21:36:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:04.053 21:36:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:34:04.053 21:36:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:34:04.053 21:36:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:34:04.053 21:36:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.053 21:36:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:04.053 21:36:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.053 21:36:53 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
00:34:04.053 21:36:53 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:34:04.053 21:36:53 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:34:04.053 21:36:53 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:34:04.053 21:36:53 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:34:04.053 21:36:53 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:34:04.053 21:36:53 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:04.053 21:36:53 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:04.053 21:36:53 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:04.053 21:36:53 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:34:04.053 21:36:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:04.053 21:36:53 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:04.053 21:36:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:04.053 21:36:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:04.053 21:36:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:04.053 21:36:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:04.053 21:36:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:04.053 21:36:53 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:04.053 21:36:53 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:34:04.053 21:36:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:34:04.313 Cannot find device "nvmf_tgt_br" 00:34:04.313 21:36:53 -- nvmf/common.sh@155 -- # true 00:34:04.313 21:36:53 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:34:04.313 Cannot find device "nvmf_tgt_br2" 00:34:04.313 21:36:53 -- nvmf/common.sh@156 -- # true 00:34:04.313 21:36:53 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:34:04.313 21:36:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:34:04.313 Cannot find device "nvmf_tgt_br" 00:34:04.313 21:36:53 -- nvmf/common.sh@158 -- # true 00:34:04.313 21:36:53 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:34:04.313 Cannot find device "nvmf_tgt_br2" 00:34:04.313 21:36:53 -- nvmf/common.sh@159 -- # true 00:34:04.313 21:36:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:34:04.313 21:36:53 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:34:04.313 21:36:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:04.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:04.313 21:36:53 -- nvmf/common.sh@162 -- # true 00:34:04.313 21:36:53 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:04.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:04.314 21:36:53 -- nvmf/common.sh@163 -- # true 00:34:04.314 21:36:53 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:34:04.314 21:36:53 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:04.314 21:36:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:04.314 21:36:53 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:04.314 21:36:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:04.314 21:36:53 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:04.314 21:36:53 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:34:04.314 21:36:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:04.314 21:36:53 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:04.314 21:36:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:34:04.314 21:36:53 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:34:04.314 21:36:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:34:04.314 21:36:53 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:34:04.314 21:36:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:04.314 21:36:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:04.314 21:36:53 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:04.314 21:36:53 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:34:04.314 21:36:53 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:34:04.314 21:36:53 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:34:04.314 21:36:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:04.314 21:36:53 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:04.314 21:36:53 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:04.314 21:36:53 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:04.314 21:36:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:34:04.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:04.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:34:04.314 00:34:04.314 --- 10.0.0.2 ping statistics --- 00:34:04.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.314 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:34:04.314 21:36:53 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:34:04.314 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:04.314 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:34:04.314 00:34:04.314 --- 10.0.0.3 ping statistics --- 00:34:04.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.314 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:34:04.314 21:36:53 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:04.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:04.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:34:04.573 00:34:04.573 --- 10.0.0.1 ping statistics --- 00:34:04.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.573 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:34:04.573 21:36:53 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:04.573 21:36:53 -- nvmf/common.sh@422 -- # return 0 00:34:04.573 21:36:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:34:04.573 21:36:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:04.573 21:36:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:34:04.573 21:36:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:34:04.573 21:36:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:04.573 21:36:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:34:04.573 21:36:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:34:04.573 21:36:53 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:34:04.573 21:36:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:34:04.573 21:36:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:04.573 21:36:53 -- common/autotest_common.sh@10 -- # set +x 00:34:04.573 21:36:53 -- nvmf/common.sh@470 -- # nvmfpid=107372 00:34:04.573 21:36:53 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:04.573 21:36:53 -- nvmf/common.sh@471 -- # waitforlisten 107372 00:34:04.573 21:36:53 -- common/autotest_common.sh@817 -- # '[' -z 107372 ']' 00:34:04.573 21:36:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.573 21:36:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:04.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:04.573 21:36:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.573 21:36:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:04.573 21:36:53 -- common/autotest_common.sh@10 -- # set +x 00:34:04.573 [2024-04-26 21:36:53.645795] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:34:04.573 [2024-04-26 21:36:53.645896] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:04.573 [2024-04-26 21:36:53.785402] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:04.831 [2024-04-26 21:36:53.838736] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:04.831 [2024-04-26 21:36:53.838793] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:04.831 [2024-04-26 21:36:53.838801] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:04.832 [2024-04-26 21:36:53.838807] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:04.832 [2024-04-26 21:36:53.838813] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
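The nvmf_veth_init portion of the trace above is easier to follow once the xtrace prefixes are stripped. The sketch below only restates commands that already appear in the trace (bring-up of the remaining links and the FORWARD rule omitted for brevity); it is a summary of what the harness ran, not an extra setup step:

  # One network namespace for the target, veth pairs for initiator and target,
  # all joined through the nvmf_br bridge; 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listeners.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br        # nvmf_tgt_br and nvmf_tgt_br2 are enslaved the same way
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                             # reachability check, as in the trace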
00:34:04.832 [2024-04-26 21:36:53.838928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.832 [2024-04-26 21:36:53.838931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.400 21:36:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:05.400 21:36:54 -- common/autotest_common.sh@850 -- # return 0 00:34:05.400 21:36:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:34:05.400 21:36:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:05.400 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:34:05.400 21:36:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:05.400 21:36:54 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:05.400 21:36:54 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:05.658 [2024-04-26 21:36:54.808025] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.658 21:36:54 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:05.916 Malloc0 00:34:05.916 21:36:55 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:06.175 21:36:55 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:06.436 21:36:55 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:06.706 [2024-04-26 21:36:55.857606] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.706 21:36:55 -- host/timeout.sh@32 -- # bdevperf_pid=107476 00:34:06.706 21:36:55 -- host/timeout.sh@34 -- # waitforlisten 107476 /var/tmp/bdevperf.sock 00:34:06.706 21:36:55 -- common/autotest_common.sh@817 -- # '[' -z 107476 ']' 00:34:06.706 21:36:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:06.706 21:36:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:06.707 21:36:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:06.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:06.707 21:36:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:06.707 21:36:55 -- common/autotest_common.sh@10 -- # set +x 00:34:06.707 21:36:55 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:34:06.707 [2024-04-26 21:36:55.943747] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
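With the target reactors up, host/timeout.sh provisions the storage stack over JSON-RPC before pointing bdevperf at it: a TCP transport, a malloc backing bdev (64 MiB, 512-byte blocks), subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. The sequence below restates the rpc.py calls visible in the xtrace; without -s they go to the target's default control socket, /var/tmp/spdk.sock per the waitforlisten message above.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Enable the TCP transport with the options used by the test (-o, -u 8192).
$rpc nvmf_create_transport -t tcp -o -u 8192

# Back the namespace with a malloc bdev.
$rpc bdev_malloc_create 64 512 -b Malloc0

# Subsystem, namespace, and the listener the initiator will connect to.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420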
00:34:06.707 [2024-04-26 21:36:55.943848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107476 ] 00:34:06.964 [2024-04-26 21:36:56.070797] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.964 [2024-04-26 21:36:56.143712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:07.898 21:36:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:07.898 21:36:56 -- common/autotest_common.sh@850 -- # return 0 00:34:07.898 21:36:56 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:07.898 21:36:57 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:34:08.157 NVMe0n1 00:34:08.416 21:36:57 -- host/timeout.sh@51 -- # rpc_pid=107524 00:34:08.416 21:36:57 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:08.416 21:36:57 -- host/timeout.sh@53 -- # sleep 1 00:34:08.416 Running I/O for 10 seconds... 00:34:09.350 21:36:58 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:09.610 [2024-04-26 21:36:58.646922] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 00:34:09.610 [2024-04-26 21:36:58.646992] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 00:34:09.610 [2024-04-26 21:36:58.647000] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 00:34:09.610 [2024-04-26 21:36:58.647006] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 00:34:09.610 [2024-04-26 21:36:58.647012] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 00:34:09.610 [2024-04-26 21:36:58.647018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 00:34:09.610 [2024-04-26 21:36:58.647024] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 00:34:09.610 [2024-04-26 21:36:58.647031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 00:34:09.611 [2024-04-26 21:36:58.647037] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 00:34:09.611 [2024-04-26 21:36:58.647042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 00:34:09.611 [2024-04-26 21:36:58.647048] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 00:34:09.611 [2024-04-26 21:36:58.647054] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 
00:34:09.611 [2024-04-26 21:36:58.647060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 00:34:09.611 [2024-04-26 21:36:58.647065] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141680 is same with the state(5) to be set 00:34:09.611 [2024-04-26 21:36:58.647280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.611 [2024-04-26 21:36:58.647342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.611 [2024-04-26 21:36:58.647377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.611 [2024-04-26 21:36:58.647393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.611 [2024-04-26 21:36:58.647408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.611 [2024-04-26 21:36:58.647423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.611 [2024-04-26 21:36:58.647437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.611 [2024-04-26 21:36:58.647452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:28 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81592 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.611 [2024-04-26 21:36:58.647928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.611 [2024-04-26 21:36:58.647941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.647954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.612 [2024-04-26 21:36:58.647961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.647970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.612 [2024-04-26 21:36:58.647977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.647985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.612 [2024-04-26 
21:36:58.647991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.612 [2024-04-26 21:36:58.648006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.612 [2024-04-26 21:36:58.648026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.612 [2024-04-26 21:36:58.648041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.612 [2024-04-26 21:36:58.648055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.612 [2024-04-26 21:36:58.648069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.612 [2024-04-26 21:36:58.648089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.612 [2024-04-26 21:36:58.648103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:09.612 [2024-04-26 21:36:58.648386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.612 [2024-04-26 21:36:58.648584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.612 [2024-04-26 21:36:58.648591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:09.613 [2024-04-26 21:36:58.648648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648817] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.648993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.648999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:123 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.613 [2024-04-26 21:36:58.649235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.613 [2024-04-26 21:36:58.649243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.614 [2024-04-26 21:36:58.649249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.614 [2024-04-26 21:36:58.649261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.614 [2024-04-26 21:36:58.649268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.614 [2024-04-26 21:36:58.649276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.614 [2024-04-26 21:36:58.649283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.614 [2024-04-26 21:36:58.649291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81352 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.614 [2024-04-26 21:36:58.649297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.614 [2024-04-26 21:36:58.649307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.614 [2024-04-26 21:36:58.649313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.614 [2024-04-26 21:36:58.649325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.614 [2024-04-26 21:36:58.649340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.614 [2024-04-26 21:36:58.649348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.614 [2024-04-26 21:36:58.649355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.614 [2024-04-26 21:36:58.649363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.614 [2024-04-26 21:36:58.649369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.614 [2024-04-26 21:36:58.649377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.614 [2024-04-26 21:36:58.649384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.614 [2024-04-26 21:36:58.649392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:09.614 [2024-04-26 21:36:58.649399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.614 [2024-04-26 21:36:58.649406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e9f30 is same with the state(5) to be set 00:34:09.614 [2024-04-26 21:36:58.649415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:09.614 [2024-04-26 21:36:58.649420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:09.614 [2024-04-26 21:36:58.649426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81408 len:8 PRP1 0x0 PRP2 0x0 00:34:09.614 [2024-04-26 21:36:58.649436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:09.614 [2024-04-26 21:36:58.649486] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5e9f30 was disconnected and freed. reset controller. 
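The wall of ABORTED - SQ DELETION completions above is the expected fallout of the step at host/timeout.sh@55: while bdevperf is driving a queue depth of 128 against NVMe0n1, the listener is yanked from the subsystem, the TCP qpair is torn down, every command still queued on it is completed manually as aborted, and the qpair (0x5e9f30 here) is freed so the host can start a controller reset. The trigger is the single RPC traced at the top of this burst; a sketch:

# Removing the listener under active I/O is what produces the abort storm above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420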
00:34:09.614 [2024-04-26 21:36:58.649723] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.614 [2024-04-26 21:36:58.649818] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cbc30 (9): Bad file descriptor 00:34:09.614 [2024-04-26 21:36:58.649904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.614 [2024-04-26 21:36:58.649934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.614 [2024-04-26 21:36:58.649951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5cbc30 with addr=10.0.0.2, port=4420 00:34:09.614 [2024-04-26 21:36:58.649963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cbc30 is same with the state(5) to be set 00:34:09.614 [2024-04-26 21:36:58.649976] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cbc30 (9): Bad file descriptor 00:34:09.614 [2024-04-26 21:36:58.649987] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.614 [2024-04-26 21:36:58.649993] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.614 [2024-04-26 21:36:58.650001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.614 [2024-04-26 21:36:58.650022] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.614 [2024-04-26 21:36:58.650029] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.614 21:36:58 -- host/timeout.sh@56 -- # sleep 2 00:34:11.510 [2024-04-26 21:37:00.646427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.510 [2024-04-26 21:37:00.646534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.510 [2024-04-26 21:37:00.646547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5cbc30 with addr=10.0.0.2, port=4420 00:34:11.510 [2024-04-26 21:37:00.646559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cbc30 is same with the state(5) to be set 00:34:11.510 [2024-04-26 21:37:00.646584] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cbc30 (9): Bad file descriptor 00:34:11.510 [2024-04-26 21:37:00.646599] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.510 [2024-04-26 21:37:00.646606] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.510 [2024-04-26 21:37:00.646614] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.510 [2024-04-26 21:37:00.646637] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
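The reconnect rhythm in these records follows the options bdevperf was given when the controller was attached (host/timeout.sh@45-46): connect() to 10.0.0.2:4420 fails with errno 111 because the listener is gone, the retries at roughly two-second spacing match --reconnect-delay-sec 2, and once --ctrlr-loss-timeout-sec 5 elapses the reset is abandoned. For reference, the attach sequence as traced earlier:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# Options call made by the test before attaching (as traced above).
$rpc bdev_nvme_set_options -r -1

# Attach with a 2 s reconnect delay and a 5 s controller-loss timeout.
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2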
00:34:11.510 [2024-04-26 21:37:00.646645] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.510 21:37:00 -- host/timeout.sh@57 -- # get_controller 00:34:11.510 21:37:00 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:11.510 21:37:00 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:34:11.767 21:37:00 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:34:11.767 21:37:00 -- host/timeout.sh@58 -- # get_bdev 00:34:11.767 21:37:00 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:34:11.767 21:37:00 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:34:12.057 21:37:01 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:34:12.057 21:37:01 -- host/timeout.sh@61 -- # sleep 5 00:34:13.438 [2024-04-26 21:37:02.643024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.438 [2024-04-26 21:37:02.643119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:13.438 [2024-04-26 21:37:02.643134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5cbc30 with addr=10.0.0.2, port=4420 00:34:13.438 [2024-04-26 21:37:02.643146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cbc30 is same with the state(5) to be set 00:34:13.438 [2024-04-26 21:37:02.643173] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5cbc30 (9): Bad file descriptor 00:34:13.438 [2024-04-26 21:37:02.643187] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:13.438 [2024-04-26 21:37:02.643194] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:13.438 [2024-04-26 21:37:02.643202] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:13.438 [2024-04-26 21:37:02.643226] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:13.438 [2024-04-26 21:37:02.643236] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:15.972 [2024-04-26 21:37:04.639537] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
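Between reconnect attempts the test polls whether the initiator still knows about the controller and its bdev: at 21:37:00 get_controller and get_bdev still return NVMe0 and NVMe0n1, while after the controller-loss timeout (the 21:37:06 checks just below) both come back empty. A plausible reconstruction of those helpers, inferred from the xtrace rather than quoted from timeout.sh:

# Hypothetical reconstruction of the helpers seen in the trace (names match the xtrace).
get_controller() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'
}

get_bdev() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_get_bdevs | jq -r '.[].name'
}

# While the controller is merely reconnecting these still report the names;
# once ctrlr-loss-timeout expires they print nothing.
[[ $(get_controller) == "NVMe0" ]]
[[ $(get_bdev) == "NVMe0n1" ]]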
00:34:16.539 00:34:16.539 Latency(us) 00:34:16.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.539 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:16.539 Verification LBA range: start 0x0 length 0x4000 00:34:16.539 NVMe0n1 : 8.12 1242.62 4.85 15.76 0.00 101815.18 1888.81 7033243.39 00:34:16.539 =================================================================================================================== 00:34:16.539 Total : 1242.62 4.85 15.76 0.00 101815.18 1888.81 7033243.39 00:34:16.539 0 00:34:17.106 21:37:06 -- host/timeout.sh@62 -- # get_controller 00:34:17.106 21:37:06 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:17.106 21:37:06 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:34:17.365 21:37:06 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:34:17.365 21:37:06 -- host/timeout.sh@63 -- # get_bdev 00:34:17.365 21:37:06 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:34:17.365 21:37:06 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:34:17.623 21:37:06 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:34:17.623 21:37:06 -- host/timeout.sh@65 -- # wait 107524 00:34:17.623 21:37:06 -- host/timeout.sh@67 -- # killprocess 107476 00:34:17.623 21:37:06 -- common/autotest_common.sh@936 -- # '[' -z 107476 ']' 00:34:17.623 21:37:06 -- common/autotest_common.sh@940 -- # kill -0 107476 00:34:17.623 21:37:06 -- common/autotest_common.sh@941 -- # uname 00:34:17.623 21:37:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:17.623 21:37:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 107476 00:34:17.623 21:37:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:34:17.623 21:37:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:34:17.623 killing process with pid 107476 00:34:17.623 21:37:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 107476' 00:34:17.623 21:37:06 -- common/autotest_common.sh@955 -- # kill 107476 00:34:17.623 Received shutdown signal, test time was about 9.173080 seconds 00:34:17.623 00:34:17.623 Latency(us) 00:34:17.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:17.623 =================================================================================================================== 00:34:17.623 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:17.624 21:37:06 -- common/autotest_common.sh@960 -- # wait 107476 00:34:17.624 21:37:06 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:17.896 [2024-04-26 21:37:07.118295] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:18.173 21:37:07 -- host/timeout.sh@74 -- # bdevperf_pid=107676 00:34:18.173 21:37:07 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:34:18.173 21:37:07 -- host/timeout.sh@76 -- # waitforlisten 107676 /var/tmp/bdevperf.sock 00:34:18.173 21:37:07 -- common/autotest_common.sh@817 -- # '[' -z 107676 ']' 00:34:18.173 21:37:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:18.173 21:37:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:18.173 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock... 00:34:18.173 21:37:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:18.173 21:37:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:18.173 21:37:07 -- common/autotest_common.sh@10 -- # set +x 00:34:18.173 [2024-04-26 21:37:07.209269] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:34:18.173 [2024-04-26 21:37:07.209371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107676 ] 00:34:18.174 [2024-04-26 21:37:07.345187] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:18.174 [2024-04-26 21:37:07.397151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:19.113 21:37:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:19.113 21:37:08 -- common/autotest_common.sh@850 -- # return 0 00:34:19.113 21:37:08 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:19.113 21:37:08 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:34:19.372 NVMe0n1 00:34:19.372 21:37:08 -- host/timeout.sh@84 -- # rpc_pid=107724 00:34:19.372 21:37:08 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:19.372 21:37:08 -- host/timeout.sh@86 -- # sleep 1 00:34:19.632 Running I/O for 10 seconds... 
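To summarize the setup visible in the trace above before the target listener is removed below: a second bdevperf instance is started with -z, so it only runs I/O once perform_tests is issued over the RPC socket at /var/tmp/bdevperf.sock; bdev_nvme_set_options -r -1 is applied; and the controller is attached with --ctrlr-loss-timeout-sec 5, --fast-io-fail-timeout-sec 2 and --reconnect-delay-sec 1. A condensed sketch of that sequence, using the same binaries, flags and addresses as this run (the socket-wait loop is only a stand-in for the script's waitforlisten helper):

  # Sketch of the second bdevperf setup driven by host/timeout.sh above.
  spdk=/home/vagrant/spdk_repo/spdk
  rpc="$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  "$spdk"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  # stand-in for waitforlisten: wait until the bdevperf RPC socket exists
  until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done
  $rpc bdev_nvme_set_options -r -1
  $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  "$spdk"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &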
00:34:20.567 21:37:09 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:20.827 [2024-04-26 21:37:09.834774] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22e2d40 is same with the state(5) to be set
[last message repeated for tqpair=0x22e2d40 from 21:37:09.834838 through 21:37:09.835606]
00:34:20.828 [2024-04-26 21:37:09.835850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:20.828 [2024-04-26 21:37:09.835888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:20.828 [2024-04-26 21:37:09.835910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:20.828 [2024-04-26 21:37:09.835917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:20.828 [2024-04-26 21:37:09.835927] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.828 [2024-04-26 21:37:09.835935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.828 [2024-04-26 21:37:09.835944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.828 [2024-04-26 21:37:09.835950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.828 [2024-04-26 21:37:09.835959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.828 [2024-04-26 21:37:09.835965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.828 [2024-04-26 21:37:09.835974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.828 [2024-04-26 21:37:09.835981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.828 [2024-04-26 21:37:09.835989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.828 [2024-04-26 21:37:09.835996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.828 [2024-04-26 21:37:09.836004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.828 [2024-04-26 21:37:09.836010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.828 [2024-04-26 21:37:09.836019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.828 [2024-04-26 21:37:09.836025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.828 [2024-04-26 21:37:09.836033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.828 [2024-04-26 21:37:09.836040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.828 [2024-04-26 21:37:09.836049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.828 [2024-04-26 21:37:09.836056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.828 [2024-04-26 21:37:09.836066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.828 [2024-04-26 21:37:09.836072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.828 [2024-04-26 21:37:09.836081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 
lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.828 [2024-04-26 21:37:09.836087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 
21:37:09.836407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.829 [2024-04-26 21:37:09.836733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.829 [2024-04-26 21:37:09.836747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.836754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.836762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.836769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.836782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.836789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.836797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.836804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.836826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.836843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.836852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.836872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.836892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.836900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.836913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.836920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.836929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.836945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.836954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.836960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.836979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.836985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.836999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.837006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.837019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.837026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.837046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.837054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.837067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.837087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.837100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.837111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.837121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.837127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.837136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.837154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.837167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.837178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.837187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.837194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.830 [2024-04-26 21:37:09.837202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:20.830 [2024-04-26 21:37:09.837208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:34:20.830 [2024-04-26 21:37:09.837221 .. 21:37:09.838195] nvme_qpair.c: 243/474: *NOTICE*: [condensed: repeated command/completion pairs while the qpair is flushed — READ sqid:1 nsid:1 len:8 for every lba from 83256 through 83672 in steps of 8 (SGL TRANSPORT DATA BLOCK), then WRITE sqid:1 cid:17 nsid:1 lba:83680 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000); each command completes ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:34:20.832 [2024-04-26 21:37:09.838203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc5e10 is same with the state(5) to be set
00:34:20.832 [2024-04-26 21:37:09.838214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:20.832 [2024-04-26 21:37:09.838224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:20.832 [2024-04-26 21:37:09.838231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83688 len:8 PRP1 0x0 PRP2 0x0
00:34:20.832 [2024-04-26 21:37:09.838237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:20.832 [2024-04-26 21:37:09.838299] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cc5e10 was disconnected and freed. reset controller.
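The "(00/08)" in every completion above is NVMe status code type 0x0 (generic) with status code 0x08, i.e. Command Aborted due to SQ Deletion: once the target's TCP listener goes away, the host tears down the I/O qpair and completes everything still queued with that status before scheduling the controller reset seen in the next entries. To gauge how much I/O a listener removal cut off, a quick count over a saved copy of this console output is enough; the build.log filename below is only an assumption, not something the test produces.

  # minimal triage sketch over a saved copy of this console log (filename assumed)
  grep -o 'ABORTED - SQ DELETION' build.log | wc -l                             # total commands flushed with status (00/08)
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: READ' build.log | wc -l     # how many of them were reads
  grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: WRITE' build.log | wc -l    # how many were writes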
00:34:20.832 [2024-04-26 21:37:09.838564] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.832 [2024-04-26 21:37:09.838645] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca7c30 (9): Bad file descriptor
00:34:20.832 [2024-04-26 21:37:09.838736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.832 [2024-04-26 21:37:09.838770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.832 [2024-04-26 21:37:09.838781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca7c30 with addr=10.0.0.2, port=4420
00:34:20.832 [2024-04-26 21:37:09.838789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca7c30 is same with the state(5) to be set
00:34:20.832 [2024-04-26 21:37:09.838802] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca7c30 (9): Bad file descriptor
00:34:20.832 [2024-04-26 21:37:09.838829] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:20.832 [2024-04-26 21:37:09.838837] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:20.832 [2024-04-26 21:37:09.838852] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:20.832 [2024-04-26 21:37:09.838869] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:20.832 [2024-04-26 21:37:09.838876] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:20.832 21:37:09 -- host/timeout.sh@90 -- # sleep 1
00:34:21.813 [2024-04-26 21:37:10.837078 .. 21:37:10.837259] [condensed: one second later the same reconnect attempt against 10.0.0.2:4420 fails again — connect() failed, errno = 111 (twice), sock connection error of tqpair=0x1ca7c30, Ctrlr is in error state, controller reinitialization failed, in failed state, Resetting controller failed.]
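errno = 111 is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 any more, so every reconnect attempt made during the sleep-1 retry loop fails and the controller stays in the failed state. The test recovers in the next entries by re-adding the TCP listener over the RPC socket; a sketch of that step, using the rpc.py path, subsystem NQN and address that appear in this log (the nvmf_subsystem_get_listeners call is only an assumed way to confirm the result, not part of the test):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420    # restore the TCP listener the test removed earlier
  $RPC nvmf_subsystem_get_listeners "$NQN"                              # assumed check that 10.0.0.2:4420 is listed again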
00:34:21.814 [2024-04-26 21:37:10.837268] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:21.814 21:37:10 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:22.070 [2024-04-26 21:37:11.129921] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:22.070 21:37:11 -- host/timeout.sh@92 -- # wait 107724
00:34:22.635 [2024-04-26 21:37:11.846564] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:34:30.770
00:34:30.770                                                              Latency(us)
00:34:30.770 Device Information                      : runtime(s)     IOPS      MiB/s    Fail/s     TO/s     Average        min        max
00:34:30.770 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:30.770   Verification LBA range: start 0x0 length 0x4000
00:34:30.770   NVMe0n1                               :      10.02   6489.04      25.35      0.00     0.00    19703.68    1831.57 3033086.21
00:34:30.770 ===================================================================================================================
00:34:30.770 Total                                   :              6489.04      25.35      0.00     0.00    19703.68    1831.57 3033086.21
00:34:30.770 0
00:34:30.770 21:37:18 -- host/timeout.sh@97 -- # rpc_pid=107841
00:34:30.770 21:37:18 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:34:30.770 21:37:18 -- host/timeout.sh@98 -- # sleep 1
00:34:30.770 Running I/O for 10 seconds...
00:34:30.771 21:37:19 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:30.770 [2024-04-26 21:37:19.945821 .. 21:37:19.945958] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: [condensed: "The recv state of tqpair=0x21399d0 is same with the state(5) to be set", repeated for each poll while the target-side qpair is torn down]
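At this point the first bdevperf job has finished (the latency table above) and the test moves straight into the next phase: it triggers another 10-second verify run through the bdevperf RPC helper, waits one second so I/O is in flight, and then removes the listener again, which produces the abort storm that follows. A sketch of that sequence as host/timeout.sh appears to run it; the backgrounding and rpc_pid handling are inferred from the echoed commands, not copied verbatim:

  BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $BDEVPERF_PY -s /var/tmp/bdevperf.sock perform_tests &    # ask the running bdevperf app to start the next 10 s run
  rpc_pid=$!                                                # remember the helper's PID so the test can wait on it later
  sleep 1                                                   # let the verify workload get going
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420    # drop the listener mid-run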
00:34:30.770 [2024-04-26 21:37:19.947909 .. 21:37:19.962912] nvme_qpair.c: 243/474: *NOTICE*: [condensed: second flush after the listener removal — READ sqid:1 nsid:1 len:8 for lba 80048 through 80120 (SGL TRANSPORT DATA BLOCK) and WRITE sqid:1 nsid:1 len:8 for lba 80128 through 80504 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completing ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the remaining queued WRITEs, cid:0 lba 80512 through 80848 (PRP1 0x0 PRP2 0x0), are then finished by nvme_qpair_manual_complete_request with the same ABORTED - SQ DELETION status while nvme_qpair_abort_queued_reqs keeps logging "aborting queued i/o"]
00:34:30.773 [2024-04-26 21:37:19.962960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*:
aborting queued i/o 00:34:30.773 [2024-04-26 21:37:19.963009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.963051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80856 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.963105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.963162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.963215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.963259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80864 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.963312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.963362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.963404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.963445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80872 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.963502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.963551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.963592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.963629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80880 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.963684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.963728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.963777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.963816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80888 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.963865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.963911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.963953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.963993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80896 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.964042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.964084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 
21:37:19.964134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.964170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80904 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.964181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.964195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.964200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.964206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80912 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.964212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.964224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.964230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.964235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80920 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.964272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.964280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.964285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.964291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80928 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.964316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.964328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.964350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.964356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80936 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.964363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.964378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.964383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.964388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80944 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.964408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.964425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.964431] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.964436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80952 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.964447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.964455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.964471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.964477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80960 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.964488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.964494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.964513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.964518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80968 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.964530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.964537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.964555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.964566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80976 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.964576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.964583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.964589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.964606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80984 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.964621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.964629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.964634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.964640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80992 len:8 PRP1 0x0 PRP2 0x0 00:34:30.774 [2024-04-26 21:37:19.964650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.774 [2024-04-26 21:37:19.964657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.774 [2024-04-26 21:37:19.964666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:34:30.774 [2024-04-26 21:37:19.964672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81000 len:8 PRP1 0x0 PRP2 0x0 00:34:30.775 [2024-04-26 21:37:19.964679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.775 [2024-04-26 21:37:19.964686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.775 [2024-04-26 21:37:19.964704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.775 [2024-04-26 21:37:19.964709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81008 len:8 PRP1 0x0 PRP2 0x0 00:34:30.775 [2024-04-26 21:37:19.964729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.775 [2024-04-26 21:37:19.964739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.775 [2024-04-26 21:37:19.964744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.775 [2024-04-26 21:37:19.964755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81016 len:8 PRP1 0x0 PRP2 0x0 00:34:30.775 [2024-04-26 21:37:19.964762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.775 [2024-04-26 21:37:19.964769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.775 [2024-04-26 21:37:19.964786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.775 [2024-04-26 21:37:19.964798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81024 len:8 PRP1 0x0 PRP2 0x0 00:34:30.775 [2024-04-26 21:37:19.964809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.775 [2024-04-26 21:37:19.964817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.775 [2024-04-26 21:37:19.964822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.775 [2024-04-26 21:37:19.964827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81032 len:8 PRP1 0x0 PRP2 0x0 00:34:30.775 [2024-04-26 21:37:19.964843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.775 [2024-04-26 21:37:19.964854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.775 [2024-04-26 21:37:19.964859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.775 [2024-04-26 21:37:19.964865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81040 len:8 PRP1 0x0 PRP2 0x0 00:34:30.775 [2024-04-26 21:37:19.964884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.775 [2024-04-26 21:37:19.964901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.775 [2024-04-26 21:37:19.964906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.775 [2024-04-26 
21:37:19.964912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81048 len:8 PRP1 0x0 PRP2 0x0 00:34:30.775 [2024-04-26 21:37:19.964917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.775 [2024-04-26 21:37:19.964935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.775 [2024-04-26 21:37:19.964944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.775 [2024-04-26 21:37:19.964950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81056 len:8 PRP1 0x0 PRP2 0x0 00:34:30.775 [2024-04-26 21:37:19.964956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.775 [2024-04-26 21:37:19.964977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:30.775 [2024-04-26 21:37:19.964987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:30.775 [2024-04-26 21:37:19.964997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81064 len:8 PRP1 0x0 PRP2 0x0 00:34:30.775 [2024-04-26 21:37:19.965004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.775 [2024-04-26 21:37:19.965103] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cd57a0 was disconnected and freed. reset controller. 00:34:30.775 [2024-04-26 21:37:19.965239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:30.775 [2024-04-26 21:37:19.965257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.775 [2024-04-26 21:37:19.965267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:30.775 [2024-04-26 21:37:19.965273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.775 [2024-04-26 21:37:19.965283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:30.775 [2024-04-26 21:37:19.965290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.775 [2024-04-26 21:37:19.965297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:30.775 [2024-04-26 21:37:19.965303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:30.775 [2024-04-26 21:37:19.965310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca7c30 is same with the state(5) to be set 00:34:30.775 [2024-04-26 21:37:19.965539] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.775 [2024-04-26 21:37:19.965565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca7c30 (9): Bad file descriptor 00:34:30.775 [2024-04-26 21:37:19.965652] 
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.775 [2024-04-26 21:37:19.965688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.775 [2024-04-26 21:37:19.965698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca7c30 with addr=10.0.0.2, port=4420 00:34:30.775 [2024-04-26 21:37:19.965720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca7c30 is same with the state(5) to be set 00:34:30.775 [2024-04-26 21:37:19.965743] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca7c30 (9): Bad file descriptor 00:34:30.775 [2024-04-26 21:37:19.965762] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:30.775 [2024-04-26 21:37:19.965774] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:30.775 [2024-04-26 21:37:19.965783] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:30.775 [2024-04-26 21:37:19.965805] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:30.775 [2024-04-26 21:37:19.965813] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:30.775 21:37:19 -- host/timeout.sh@101 -- # sleep 3 00:34:32.152 [2024-04-26 21:37:20.964005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.152 [2024-04-26 21:37:20.964085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.152 [2024-04-26 21:37:20.964097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca7c30 with addr=10.0.0.2, port=4420 00:34:32.152 [2024-04-26 21:37:20.964108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca7c30 is same with the state(5) to be set 00:34:32.152 [2024-04-26 21:37:20.964126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca7c30 (9): Bad file descriptor 00:34:32.152 [2024-04-26 21:37:20.964139] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.152 [2024-04-26 21:37:20.964145] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.152 [2024-04-26 21:37:20.964155] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.152 [2024-04-26 21:37:20.964176] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:32.152 [2024-04-26 21:37:20.964184] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:32.718 [2024-04-26 21:37:21.962396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.718 [2024-04-26 21:37:21.962493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.718 [2024-04-26 21:37:21.962505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca7c30 with addr=10.0.0.2, port=4420 00:34:32.718 [2024-04-26 21:37:21.962515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca7c30 is same with the state(5) to be set 00:34:32.718 [2024-04-26 21:37:21.962536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca7c30 (9): Bad file descriptor 00:34:32.718 [2024-04-26 21:37:21.962551] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:32.718 [2024-04-26 21:37:21.962559] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:32.718 [2024-04-26 21:37:21.962567] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:32.718 [2024-04-26 21:37:21.962591] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:32.718 [2024-04-26 21:37:21.962599] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.096 [2024-04-26 21:37:22.963510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.096 [2024-04-26 21:37:22.963599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:34.096 [2024-04-26 21:37:22.963611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca7c30 with addr=10.0.0.2, port=4420 00:34:34.096 [2024-04-26 21:37:22.963622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca7c30 is same with the state(5) to be set 00:34:34.096 [2024-04-26 21:37:22.963832] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca7c30 (9): Bad file descriptor 00:34:34.096 [2024-04-26 21:37:22.964062] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:34.096 [2024-04-26 21:37:22.964080] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:34.096 [2024-04-26 21:37:22.964088] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:34.096 [2024-04-26 21:37:22.967308] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:34.096 [2024-04-26 21:37:22.967347] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:34.096 21:37:22 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:34.096 [2024-04-26 21:37:23.234079] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:34.096 21:37:23 -- host/timeout.sh@103 -- # wait 107841 00:34:35.032 [2024-04-26 21:37:23.998398] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:34:40.310 00:34:40.310 Latency(us) 00:34:40.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.310 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:40.310 Verification LBA range: start 0x0 length 0x4000 00:34:40.310 NVMe0n1 : 10.01 5527.09 21.59 4272.99 0.00 13039.44 547.33 3033086.21 00:34:40.310 =================================================================================================================== 00:34:40.310 Total : 5527.09 21.59 4272.99 0.00 13039.44 0.00 3033086.21 00:34:40.310 0 00:34:40.310 21:37:28 -- host/timeout.sh@105 -- # killprocess 107676 00:34:40.310 21:37:28 -- common/autotest_common.sh@936 -- # '[' -z 107676 ']' 00:34:40.310 21:37:28 -- common/autotest_common.sh@940 -- # kill -0 107676 00:34:40.310 21:37:28 -- common/autotest_common.sh@941 -- # uname 00:34:40.310 21:37:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:40.310 21:37:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 107676 00:34:40.311 21:37:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:34:40.311 21:37:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:34:40.311 killing process with pid 107676 00:34:40.311 21:37:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 107676' 00:34:40.311 21:37:28 -- common/autotest_common.sh@955 -- # kill 107676 00:34:40.311 Received shutdown signal, test time was about 10.000000 seconds 00:34:40.311 00:34:40.311 Latency(us) 00:34:40.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.311 =================================================================================================================== 00:34:40.311 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:40.311 21:37:28 -- common/autotest_common.sh@960 -- # wait 107676 00:34:40.311 21:37:29 -- host/timeout.sh@110 -- # bdevperf_pid=107964 00:34:40.311 21:37:29 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:34:40.311 21:37:29 -- host/timeout.sh@112 -- # waitforlisten 107964 /var/tmp/bdevperf.sock 00:34:40.311 21:37:29 -- common/autotest_common.sh@817 -- # '[' -z 107964 ']' 00:34:40.311 21:37:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:40.311 21:37:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:40.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:40.311 21:37:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:40.311 21:37:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:40.311 21:37:29 -- common/autotest_common.sh@10 -- # set +x 00:34:40.311 [2024-04-26 21:37:29.138883] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:34:40.311 [2024-04-26 21:37:29.138959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107964 ] 00:34:40.311 [2024-04-26 21:37:29.262730] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.311 [2024-04-26 21:37:29.330632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:40.878 21:37:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:40.878 21:37:30 -- common/autotest_common.sh@850 -- # return 0 00:34:40.878 21:37:30 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 107964 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:34:40.878 21:37:30 -- host/timeout.sh@116 -- # dtrace_pid=107992 00:34:40.878 21:37:30 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:34:41.137 21:37:30 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:34:41.396 NVMe0n1 00:34:41.396 21:37:30 -- host/timeout.sh@124 -- # rpc_pid=108040 00:34:41.396 21:37:30 -- host/timeout.sh@125 -- # sleep 1 00:34:41.396 21:37:30 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:41.655 Running I/O for 10 seconds... 00:34:42.592 21:37:31 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:42.592 [2024-04-26 21:37:31.798194] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.592 [2024-04-26 21:37:31.798251] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.592 [2024-04-26 21:37:31.798261] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.592 [2024-04-26 21:37:31.798270] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.592 [2024-04-26 21:37:31.798278] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.592 [2024-04-26 21:37:31.798286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.592 [2024-04-26 21:37:31.798294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.592 [2024-04-26 21:37:31.798302] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.592 [2024-04-26 21:37:31.798309] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.592 [2024-04-26 21:37:31.798317] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.592 [2024-04-26 21:37:31.798324] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.592 [2024-04-26 21:37:31.798345] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798361] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798377] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798385] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798393] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798402] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798410] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798419] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798434] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798442] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798451] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798460] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798476] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798484] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798504] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798513] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798522] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798531] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798540] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798548] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798557] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798566] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798574] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798593] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798603] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798619] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798642] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798650] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798658] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798666] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798675] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798688] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798697] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798706] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the 
state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798714] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798723] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798731] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798739] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798758] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798766] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798775] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798784] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798794] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798812] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798828] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798837] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798846] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798855] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798864] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798874] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798882] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798891] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798900] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798937] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798955] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798964] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.798993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.799003] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.799012] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.799022] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.799030] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.799042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.799050] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.799060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.799069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.799078] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.799087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.593 [2024-04-26 21:37:31.799096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 
21:37:31.799106] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799115] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799124] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799133] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799152] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799162] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799171] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799180] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799189] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799207] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799233] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799251] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799260] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799270] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799279] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799288] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same 
with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799306] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799324] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799348] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799367] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799376] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23931f0 is same with the state(5) to be set 00:34:42.594 [2024-04-26 21:37:31.799655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799813] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.799963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.799987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.800007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.800020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.800028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.800037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.800044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.800057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.800064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.800073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.800079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.800088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:129176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.800099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.800107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.800115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.800123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.800129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.800142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.800149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.800157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.594 [2024-04-26 21:37:31.800164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.594 [2024-04-26 21:37:31.800173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:42.595 [2024-04-26 21:37:31.800193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800374] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:90 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.595 [2024-04-26 21:37:31.800860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.595 [2024-04-26 21:37:31.800868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.800879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.800887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36272 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.800895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.800904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.800911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.800919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.800926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.800939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.800946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.800955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.800965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.800974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.800981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.800990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:42.596 [2024-04-26 21:37:31.801069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 
21:37:31.801275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.596 [2024-04-26 21:37:31.801625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.596 [2024-04-26 21:37:31.801633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801659] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:42.597 [2024-04-26 21:37:31.801978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.801985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15f8f30 is same with the state(5) to be set 00:34:42.597 [2024-04-26 21:37:31.801994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:42.597 [2024-04-26 21:37:31.801999] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:42.597 [2024-04-26 21:37:31.802005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93048 len:8 PRP1 0x0 PRP2 0x0 00:34:42.597 [2024-04-26 21:37:31.802016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:42.597 [2024-04-26 21:37:31.802076] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15f8f30 was disconnected and freed. reset controller. 00:34:42.597 [2024-04-26 21:37:31.802348] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.597 [2024-04-26 21:37:31.802433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dac30 (9): Bad file descriptor 00:34:42.597 [2024-04-26 21:37:31.802527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.597 [2024-04-26 21:37:31.802566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.597 [2024-04-26 21:37:31.802577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15dac30 with addr=10.0.0.2, port=4420 00:34:42.597 [2024-04-26 21:37:31.802584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15dac30 is same with the state(5) to be set 00:34:42.597 [2024-04-26 21:37:31.802596] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dac30 (9): Bad file descriptor 00:34:42.597 [2024-04-26 21:37:31.802608] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:42.597 [2024-04-26 21:37:31.802615] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:42.597 [2024-04-26 21:37:31.802622] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:42.597 [2024-04-26 21:37:31.802639] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
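The repeated posix_sock_create errors above report errno = 111, which on Linux is ECONNREFUSED: the connect() attempts to 10.0.0.2:4420 are being refused, most likely because no listener is bound to that address while this timeout test exercises the reconnect path, so each controller reset ends in "Resetting controller failed." A quick way to confirm the errno mapping on the build host (assuming the standard Linux headers are installed) is:

  grep -w 111 /usr/include/asm-generic/errno.h
  # should print the line defining ECONNREFUSED as 111 (Connection refused)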
00:34:42.597 [2024-04-26 21:37:31.802646] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:42.597 21:37:31 -- host/timeout.sh@128 -- # wait 108040 00:34:45.133 [2024-04-26 21:37:33.799021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.133 [2024-04-26 21:37:33.799104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.133 [2024-04-26 21:37:33.799118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15dac30 with addr=10.0.0.2, port=4420 00:34:45.133 [2024-04-26 21:37:33.799128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15dac30 is same with the state(5) to be set 00:34:45.133 [2024-04-26 21:37:33.799149] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dac30 (9): Bad file descriptor 00:34:45.133 [2024-04-26 21:37:33.799163] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:45.133 [2024-04-26 21:37:33.799169] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:45.133 [2024-04-26 21:37:33.799177] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:45.133 [2024-04-26 21:37:33.799200] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:45.133 [2024-04-26 21:37:33.799208] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:47.037 [2024-04-26 21:37:35.795588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.037 [2024-04-26 21:37:35.795670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.037 [2024-04-26 21:37:35.795682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15dac30 with addr=10.0.0.2, port=4420 00:34:47.037 [2024-04-26 21:37:35.795694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15dac30 is same with the state(5) to be set 00:34:47.037 [2024-04-26 21:37:35.795714] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15dac30 (9): Bad file descriptor 00:34:47.037 [2024-04-26 21:37:35.795727] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:47.037 [2024-04-26 21:37:35.795733] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:47.037 [2024-04-26 21:37:35.795741] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:47.037 [2024-04-26 21:37:35.795765] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:47.037 [2024-04-26 21:37:35.795773] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:48.942 [2024-04-26 21:37:37.792059] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
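Each retry above follows the same pattern: the bdev layer schedules a reconnect, the connect() is refused (errno = 111), and controller reinitialization fails, until the final "Resetting controller failed" roughly six seconds after the first reset. The trace assertion that follows counts how many "reconnect delay bdev controller NVMe0" entries were recorded and fails the run if fewer than three are present. A minimal sketch of that check, using the trace path and threshold visible in the commands below, is:

  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
  # this run records 3 reconnect delays, so the guard below does not trip
  if (( delays <= 2 )); then
      echo "expected at least 3 reconnect delays, saw $delays" >&2
      exit 1
  fi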
00:34:49.878 00:34:49.878 Latency(us) 00:34:49.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.878 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:34:49.878 NVMe0n1 : 8.12 2497.41 9.76 15.77 0.00 50965.37 2260.85 7033243.39 00:34:49.878 =================================================================================================================== 00:34:49.878 Total : 2497.41 9.76 15.77 0.00 50965.37 2260.85 7033243.39 00:34:49.878 0 00:34:49.878 21:37:38 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:49.878 Attaching 5 probes... 00:34:49.878 1213.666326: reset bdev controller NVMe0 00:34:49.878 1213.798858: reconnect bdev controller NVMe0 00:34:49.878 3210.220404: reconnect delay bdev controller NVMe0 00:34:49.878 3210.245790: reconnect bdev controller NVMe0 00:34:49.878 5206.802410: reconnect delay bdev controller NVMe0 00:34:49.878 5206.822797: reconnect bdev controller NVMe0 00:34:49.878 7203.350252: reconnect delay bdev controller NVMe0 00:34:49.878 7203.374488: reconnect bdev controller NVMe0 00:34:49.878 21:37:38 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:34:49.878 21:37:38 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:34:49.878 21:37:38 -- host/timeout.sh@136 -- # kill 107992 00:34:49.878 21:37:38 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:49.878 21:37:38 -- host/timeout.sh@139 -- # killprocess 107964 00:34:49.878 21:37:38 -- common/autotest_common.sh@936 -- # '[' -z 107964 ']' 00:34:49.878 21:37:38 -- common/autotest_common.sh@940 -- # kill -0 107964 00:34:49.878 21:37:38 -- common/autotest_common.sh@941 -- # uname 00:34:49.878 21:37:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:49.878 21:37:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 107964 00:34:49.878 killing process with pid 107964 00:34:49.878 Received shutdown signal, test time was about 8.195718 seconds 00:34:49.878 00:34:49.878 Latency(us) 00:34:49.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.878 =================================================================================================================== 00:34:49.878 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:49.878 21:37:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:34:49.878 21:37:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:34:49.878 21:37:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 107964' 00:34:49.878 21:37:38 -- common/autotest_common.sh@955 -- # kill 107964 00:34:49.878 21:37:38 -- common/autotest_common.sh@960 -- # wait 107964 00:34:49.878 21:37:39 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:50.137 21:37:39 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:34:50.137 21:37:39 -- host/timeout.sh@145 -- # nvmftestfini 00:34:50.137 21:37:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:34:50.137 21:37:39 -- nvmf/common.sh@117 -- # sync 00:34:50.137 21:37:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:50.137 21:37:39 -- nvmf/common.sh@120 -- # set +e 00:34:50.137 21:37:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:50.137 21:37:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:50.137 rmmod nvme_tcp 00:34:50.137 rmmod nvme_fabrics 00:34:50.395 rmmod nvme_keyring 00:34:50.395 21:37:39 -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:50.395 21:37:39 -- nvmf/common.sh@124 -- # set -e 00:34:50.395 21:37:39 -- nvmf/common.sh@125 -- # return 0 00:34:50.395 21:37:39 -- nvmf/common.sh@478 -- # '[' -n 107372 ']' 00:34:50.395 21:37:39 -- nvmf/common.sh@479 -- # killprocess 107372 00:34:50.395 21:37:39 -- common/autotest_common.sh@936 -- # '[' -z 107372 ']' 00:34:50.395 21:37:39 -- common/autotest_common.sh@940 -- # kill -0 107372 00:34:50.395 21:37:39 -- common/autotest_common.sh@941 -- # uname 00:34:50.395 21:37:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:34:50.395 21:37:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 107372 00:34:50.395 killing process with pid 107372 00:34:50.395 21:37:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:34:50.395 21:37:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:34:50.395 21:37:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 107372' 00:34:50.395 21:37:39 -- common/autotest_common.sh@955 -- # kill 107372 00:34:50.395 21:37:39 -- common/autotest_common.sh@960 -- # wait 107372 00:34:50.654 21:37:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:34:50.654 21:37:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:34:50.654 21:37:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:34:50.654 21:37:39 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:50.654 21:37:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:50.654 21:37:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.654 21:37:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:50.654 21:37:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:50.654 21:37:39 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:34:50.654 00:34:50.654 real 0m46.631s 00:34:50.654 user 2m17.589s 00:34:50.654 sys 0m4.420s 00:34:50.654 21:37:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:50.654 21:37:39 -- common/autotest_common.sh@10 -- # set +x 00:34:50.655 ************************************ 00:34:50.655 END TEST nvmf_timeout 00:34:50.655 ************************************ 00:34:50.655 21:37:39 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:34:50.655 21:37:39 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:34:50.655 21:37:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:50.655 21:37:39 -- common/autotest_common.sh@10 -- # set +x 00:34:50.655 21:37:39 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:34:50.655 00:34:50.655 real 18m1.076s 00:34:50.655 user 55m25.806s 00:34:50.655 sys 3m23.865s 00:34:50.655 21:37:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:34:50.655 21:37:39 -- common/autotest_common.sh@10 -- # set +x 00:34:50.655 ************************************ 00:34:50.655 END TEST nvmf_tcp 00:34:50.655 ************************************ 00:34:50.655 21:37:39 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:34:50.655 21:37:39 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:50.655 21:37:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:34:50.655 21:37:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:34:50.655 21:37:39 -- common/autotest_common.sh@10 -- # set +x 00:34:50.916 ************************************ 00:34:50.916 START TEST spdkcli_nvmf_tcp 00:34:50.916 ************************************ 00:34:50.916 21:37:39 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:50.916 * Looking for test storage... 00:34:50.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:34:50.916 21:37:40 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:34:50.916 21:37:40 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:34:50.916 21:37:40 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:34:50.916 21:37:40 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:50.916 21:37:40 -- nvmf/common.sh@7 -- # uname -s 00:34:50.916 21:37:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:50.916 21:37:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:50.916 21:37:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:50.916 21:37:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:50.916 21:37:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:50.916 21:37:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:50.916 21:37:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:50.916 21:37:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:50.916 21:37:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:50.916 21:37:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:50.916 21:37:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:34:50.916 21:37:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:34:50.916 21:37:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:50.916 21:37:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:50.916 21:37:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:50.916 21:37:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:50.916 21:37:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:50.916 21:37:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:50.916 21:37:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:50.916 21:37:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:50.916 21:37:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.916 21:37:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.916 21:37:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.916 21:37:40 -- paths/export.sh@5 -- # export PATH 00:34:50.916 21:37:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.916 21:37:40 -- nvmf/common.sh@47 -- # : 0 00:34:50.916 21:37:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:50.916 21:37:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:50.916 21:37:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:50.916 21:37:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:50.916 21:37:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:50.916 21:37:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:50.916 21:37:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:50.916 21:37:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:50.916 21:37:40 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:50.916 21:37:40 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:50.916 21:37:40 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:50.916 21:37:40 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:50.916 21:37:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:50.916 21:37:40 -- common/autotest_common.sh@10 -- # set +x 00:34:50.916 21:37:40 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:50.916 21:37:40 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=108268 00:34:50.916 21:37:40 -- spdkcli/common.sh@34 -- # waitforlisten 108268 00:34:51.176 21:37:40 -- common/autotest_common.sh@817 -- # '[' -z 108268 ']' 00:34:51.176 21:37:40 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:51.176 21:37:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.176 21:37:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:34:51.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:51.176 21:37:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.176 21:37:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:34:51.176 21:37:40 -- common/autotest_common.sh@10 -- # set +x 00:34:51.176 [2024-04-26 21:37:40.217876] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:34:51.176 [2024-04-26 21:37:40.217953] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108268 ] 00:34:51.176 [2024-04-26 21:37:40.355974] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:51.176 [2024-04-26 21:37:40.410282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.176 [2024-04-26 21:37:40.410292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:52.112 21:37:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:34:52.112 21:37:41 -- common/autotest_common.sh@850 -- # return 0 00:34:52.112 21:37:41 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:52.112 21:37:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:34:52.112 21:37:41 -- common/autotest_common.sh@10 -- # set +x 00:34:52.112 21:37:41 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:52.112 21:37:41 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:52.112 21:37:41 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:52.112 21:37:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:34:52.112 21:37:41 -- common/autotest_common.sh@10 -- # set +x 00:34:52.112 21:37:41 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:52.112 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:52.112 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:52.112 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:52.112 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:52.112 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:52.112 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:52.112 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:52.112 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:52.112 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:52.112 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:52.112 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:52.112 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:52.112 ' 00:34:52.370 [2024-04-26 21:37:41.586746] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:34:54.898 [2024-04-26 21:37:43.923598] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:56.272 [2024-04-26 21:37:45.258106] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:58.804 [2024-04-26 21:37:47.711388] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:00.714 [2024-04-26 21:37:49.777237] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:02.616 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:02.616 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:02.616 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:02.616 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:02.616 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:02.616 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:02.616 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:02.616 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:02.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:02.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:02.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:02.616 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:02.616 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:02.616 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:02.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:02.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:02.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:02.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:02.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:02.616 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:02.617 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:02.617 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:02.617 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:02.617 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:02.617 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:02.617 21:37:51 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:02.617 21:37:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:02.617 21:37:51 -- common/autotest_common.sh@10 -- # set +x 00:35:02.617 21:37:51 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:02.617 21:37:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:35:02.617 21:37:51 -- common/autotest_common.sh@10 -- # set +x 00:35:02.617 21:37:51 -- spdkcli/nvmf.sh@69 -- # check_match 00:35:02.617 21:37:51 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:35:02.875 21:37:51 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:02.875 21:37:52 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:02.875 21:37:52 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:02.875 21:37:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:02.875 21:37:52 -- common/autotest_common.sh@10 -- # set +x 00:35:02.875 21:37:52 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:02.875 21:37:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:35:02.875 21:37:52 -- common/autotest_common.sh@10 -- # set +x 00:35:02.875 21:37:52 -- spdkcli/nvmf.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:02.875 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:02.875 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:02.875 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:02.875 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:02.875 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:02.875 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:02.875 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:02.875 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:02.875 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:02.876 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:02.876 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:02.876 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:02.876 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:02.876 ' 00:35:09.433 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:09.433 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:09.433 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:09.433 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:09.433 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:09.433 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:09.433 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:09.433 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:09.433 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:09.433 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:09.433 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:09.433 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:09.433 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:09.433 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:09.433 21:37:57 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:09.433 21:37:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:09.433 21:37:57 -- common/autotest_common.sh@10 -- # set +x 00:35:09.433 21:37:57 -- spdkcli/nvmf.sh@90 -- # killprocess 108268 00:35:09.433 21:37:57 -- common/autotest_common.sh@936 -- # '[' -z 108268 ']' 00:35:09.433 21:37:57 -- common/autotest_common.sh@940 -- # kill -0 108268 00:35:09.433 21:37:57 -- common/autotest_common.sh@941 -- # uname 00:35:09.433 21:37:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:09.433 21:37:57 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108268 00:35:09.433 21:37:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:09.433 21:37:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:09.433 killing process with pid 108268 00:35:09.433 21:37:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108268' 00:35:09.433 21:37:57 -- common/autotest_common.sh@955 -- # kill 108268 00:35:09.433 [2024-04-26 21:37:57.589006] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:35:09.433 21:37:57 -- common/autotest_common.sh@960 -- # wait 108268 00:35:09.433 21:37:57 -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:09.433 21:37:57 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:09.433 21:37:57 -- spdkcli/common.sh@13 -- # '[' -n 108268 ']' 00:35:09.433 21:37:57 -- spdkcli/common.sh@14 -- # killprocess 108268 00:35:09.433 21:37:57 -- common/autotest_common.sh@936 -- # '[' -z 108268 ']' 00:35:09.433 21:37:57 -- common/autotest_common.sh@940 -- # kill -0 108268 00:35:09.433 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (108268) - No such process 00:35:09.433 Process with pid 108268 is not found 00:35:09.433 21:37:57 -- common/autotest_common.sh@963 -- # echo 'Process with pid 108268 is not found' 00:35:09.433 21:37:57 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:09.433 21:37:57 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:09.433 21:37:57 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:09.433 00:35:09.433 real 0m17.794s 00:35:09.433 user 0m38.976s 00:35:09.433 sys 0m1.008s 00:35:09.433 21:37:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:09.433 21:37:57 -- common/autotest_common.sh@10 -- # set +x 00:35:09.433 ************************************ 00:35:09.433 END TEST spdkcli_nvmf_tcp 00:35:09.433 ************************************ 00:35:09.433 21:37:57 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:09.433 21:37:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:35:09.433 21:37:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:09.433 21:37:57 -- common/autotest_common.sh@10 -- # set +x 00:35:09.433 ************************************ 00:35:09.433 START TEST nvmf_identify_passthru 00:35:09.433 ************************************ 00:35:09.433 21:37:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:09.433 * Looking for test storage... 
00:35:09.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:09.433 21:37:58 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:09.433 21:37:58 -- nvmf/common.sh@7 -- # uname -s 00:35:09.433 21:37:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:09.433 21:37:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:09.433 21:37:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:09.433 21:37:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:09.433 21:37:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:09.433 21:37:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:09.433 21:37:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:09.433 21:37:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:09.433 21:37:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:09.433 21:37:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:09.434 21:37:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:35:09.434 21:37:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:35:09.434 21:37:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:09.434 21:37:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:09.434 21:37:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:09.434 21:37:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:09.434 21:37:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:09.434 21:37:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.434 21:37:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.434 21:37:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.434 21:37:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.434 21:37:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.434 21:37:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.434 21:37:58 -- paths/export.sh@5 -- # export PATH 00:35:09.434 21:37:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.434 21:37:58 -- nvmf/common.sh@47 -- # : 0 00:35:09.434 21:37:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:09.434 21:37:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:09.434 21:37:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:09.434 21:37:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:09.434 21:37:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:09.434 21:37:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:09.434 21:37:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:09.434 21:37:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:09.434 21:37:58 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:09.434 21:37:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.434 21:37:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.434 21:37:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.434 21:37:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.434 21:37:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.434 21:37:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.434 21:37:58 -- paths/export.sh@5 -- # export PATH 00:35:09.434 21:37:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.434 21:37:58 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:35:09.434 21:37:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:35:09.434 21:37:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:09.434 21:37:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:35:09.434 21:37:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:35:09.434 21:37:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:35:09.434 21:37:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.434 21:37:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:09.434 21:37:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:09.434 21:37:58 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:35:09.434 21:37:58 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:35:09.434 21:37:58 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:35:09.434 21:37:58 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:35:09.434 21:37:58 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:35:09.434 21:37:58 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:35:09.434 21:37:58 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:09.434 21:37:58 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:09.434 21:37:58 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:09.434 21:37:58 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:35:09.434 21:37:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:09.434 21:37:58 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:09.434 21:37:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:09.434 21:37:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:09.434 21:37:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:09.434 21:37:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:09.434 21:37:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:09.434 21:37:58 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:09.434 21:37:58 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:35:09.434 21:37:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:35:09.434 Cannot find device "nvmf_tgt_br" 00:35:09.434 21:37:58 -- nvmf/common.sh@155 -- # true 00:35:09.434 21:37:58 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:35:09.434 Cannot find device "nvmf_tgt_br2" 00:35:09.434 21:37:58 -- nvmf/common.sh@156 -- # true 00:35:09.434 21:37:58 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:35:09.434 21:37:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:35:09.434 Cannot find device "nvmf_tgt_br" 00:35:09.434 21:37:58 -- nvmf/common.sh@158 -- # true 00:35:09.435 21:37:58 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:35:09.435 Cannot find device "nvmf_tgt_br2" 00:35:09.435 21:37:58 -- nvmf/common.sh@159 -- # true 00:35:09.435 21:37:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:35:09.435 21:37:58 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:35:09.435 21:37:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:09.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:09.435 21:37:58 -- nvmf/common.sh@162 -- # true 00:35:09.435 21:37:58 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:09.435 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:35:09.435 21:37:58 -- nvmf/common.sh@163 -- # true 00:35:09.435 21:37:58 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:35:09.435 21:37:58 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:09.435 21:37:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:09.435 21:37:58 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:09.435 21:37:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:09.435 21:37:58 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:09.435 21:37:58 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:09.435 21:37:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:09.435 21:37:58 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:09.435 21:37:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:35:09.435 21:37:58 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:35:09.435 21:37:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:35:09.435 21:37:58 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:35:09.435 21:37:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:09.435 21:37:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:09.435 21:37:58 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:09.435 21:37:58 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:35:09.435 21:37:58 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:35:09.435 21:37:58 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:35:09.435 21:37:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:09.435 21:37:58 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:09.435 21:37:58 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:09.435 21:37:58 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:09.435 21:37:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:35:09.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:09.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:35:09.435 00:35:09.435 --- 10.0.0.2 ping statistics --- 00:35:09.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.435 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:35:09.435 21:37:58 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:35:09.435 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:09.435 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:35:09.435 00:35:09.435 --- 10.0.0.3 ping statistics --- 00:35:09.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.435 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:35:09.435 21:37:58 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:09.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:09.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:35:09.435 00:35:09.435 --- 10.0.0.1 ping statistics --- 00:35:09.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:09.435 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:35:09.435 21:37:58 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:09.435 21:37:58 -- nvmf/common.sh@422 -- # return 0 00:35:09.435 21:37:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:35:09.435 21:37:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:09.435 21:37:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:35:09.435 21:37:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:35:09.435 21:37:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:09.435 21:37:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:35:09.435 21:37:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:35:09.435 21:37:58 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:09.435 21:37:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:35:09.435 21:37:58 -- common/autotest_common.sh@10 -- # set +x 00:35:09.435 21:37:58 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:09.435 21:37:58 -- common/autotest_common.sh@1510 -- # bdfs=() 00:35:09.435 21:37:58 -- common/autotest_common.sh@1510 -- # local bdfs 00:35:09.435 21:37:58 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:35:09.435 21:37:58 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:35:09.435 21:37:58 -- common/autotest_common.sh@1499 -- # bdfs=() 00:35:09.435 21:37:58 -- common/autotest_common.sh@1499 -- # local bdfs 00:35:09.435 21:37:58 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:09.435 21:37:58 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:09.435 21:37:58 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:35:09.435 21:37:58 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:35:09.435 21:37:58 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:35:09.435 21:37:58 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:35:09.435 21:37:58 -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:35:09.435 21:37:58 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:35:09.435 21:37:58 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:35:09.435 21:37:58 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:09.435 21:37:58 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:09.435 21:37:58 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:35:09.435 21:37:58 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:35:09.435 21:37:58 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:09.435 21:37:58 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:09.694 21:37:58 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:35:09.694 21:37:58 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:09.694 21:37:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:09.694 21:37:58 -- common/autotest_common.sh@10 -- # set +x 00:35:09.694 21:37:58 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:35:09.694 21:37:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:35:09.694 21:37:58 -- common/autotest_common.sh@10 -- # set +x 00:35:09.694 21:37:58 -- target/identify_passthru.sh@31 -- # nvmfpid=108774 00:35:09.694 21:37:58 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:09.694 21:37:58 -- target/identify_passthru.sh@35 -- # waitforlisten 108774 00:35:09.694 21:37:58 -- common/autotest_common.sh@817 -- # '[' -z 108774 ']' 00:35:09.694 21:37:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:09.694 21:37:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:09.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:09.694 21:37:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:09.694 21:37:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:09.694 21:37:58 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:09.694 21:37:58 -- common/autotest_common.sh@10 -- # set +x 00:35:09.952 [2024-04-26 21:37:58.954795] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:35:09.952 [2024-04-26 21:37:58.954867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:09.952 [2024-04-26 21:37:59.093595] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:09.952 [2024-04-26 21:37:59.151391] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:09.952 [2024-04-26 21:37:59.151446] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:09.952 [2024-04-26 21:37:59.151454] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:09.952 [2024-04-26 21:37:59.151460] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:09.952 [2024-04-26 21:37:59.151465] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
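The app_setup_trace notices above spell out how the tracepoints enabled by -e 0xFFFF can be inspected while this nvmf_tgt instance is alive. A minimal sketch of that workflow, assuming the spdk_trace parser built in this same checkout (build/bin/spdk_trace is an assumed path, not shown in this log) and that application instance 0 is still running:

    # Snapshot the live trace ring of the 'nvmf' app, shm instance 0, as the notice suggests.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
    # Or keep the raw shared-memory file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0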
00:35:09.952 [2024-04-26 21:37:59.151580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:09.952 [2024-04-26 21:37:59.151798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:09.952 [2024-04-26 21:37:59.152103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:09.952 [2024-04-26 21:37:59.152131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.885 21:37:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:10.885 21:37:59 -- common/autotest_common.sh@850 -- # return 0 00:35:10.885 21:37:59 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:10.885 21:37:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:10.885 21:37:59 -- common/autotest_common.sh@10 -- # set +x 00:35:10.885 21:37:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:10.885 21:37:59 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:10.885 21:37:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:10.885 21:37:59 -- common/autotest_common.sh@10 -- # set +x 00:35:10.885 [2024-04-26 21:37:59.945292] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:10.885 21:37:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:10.885 21:37:59 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:10.885 21:37:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:10.885 21:37:59 -- common/autotest_common.sh@10 -- # set +x 00:35:10.885 [2024-04-26 21:37:59.958674] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:10.885 21:37:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:10.885 21:37:59 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:10.885 21:37:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:10.885 21:37:59 -- common/autotest_common.sh@10 -- # set +x 00:35:10.885 21:38:00 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:35:10.885 21:38:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:10.885 21:38:00 -- common/autotest_common.sh@10 -- # set +x 00:35:10.885 Nvme0n1 00:35:10.885 21:38:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:10.885 21:38:00 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:10.885 21:38:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:10.886 21:38:00 -- common/autotest_common.sh@10 -- # set +x 00:35:10.886 21:38:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:10.886 21:38:00 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:10.886 21:38:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:10.886 21:38:00 -- common/autotest_common.sh@10 -- # set +x 00:35:10.886 21:38:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:10.886 21:38:00 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:10.886 21:38:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:10.886 21:38:00 -- common/autotest_common.sh@10 -- # set +x 00:35:10.886 [2024-04-26 21:38:00.094146] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.886 21:38:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:35:10.886 21:38:00 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:10.886 21:38:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:10.886 21:38:00 -- common/autotest_common.sh@10 -- # set +x 00:35:10.886 [2024-04-26 21:38:00.101886] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:35:10.886 [ 00:35:10.886 { 00:35:10.886 "allow_any_host": true, 00:35:10.886 "hosts": [], 00:35:10.886 "listen_addresses": [], 00:35:10.886 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:10.886 "subtype": "Discovery" 00:35:10.886 }, 00:35:10.886 { 00:35:10.886 "allow_any_host": true, 00:35:10.886 "hosts": [], 00:35:10.886 "listen_addresses": [ 00:35:10.886 { 00:35:10.886 "adrfam": "IPv4", 00:35:10.886 "traddr": "10.0.0.2", 00:35:10.886 "transport": "TCP", 00:35:10.886 "trsvcid": "4420", 00:35:10.886 "trtype": "TCP" 00:35:10.886 } 00:35:10.886 ], 00:35:10.886 "max_cntlid": 65519, 00:35:10.886 "max_namespaces": 1, 00:35:10.886 "min_cntlid": 1, 00:35:10.886 "model_number": "SPDK bdev Controller", 00:35:10.886 "namespaces": [ 00:35:10.886 { 00:35:10.886 "bdev_name": "Nvme0n1", 00:35:10.886 "name": "Nvme0n1", 00:35:10.886 "nguid": "0412A07FD5AE4ECE92E938B7A88B6EF1", 00:35:10.886 "nsid": 1, 00:35:10.886 "uuid": "0412a07f-d5ae-4ece-92e9-38b7a88b6ef1" 00:35:10.886 } 00:35:10.886 ], 00:35:10.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:10.886 "serial_number": "SPDK00000000000001", 00:35:10.886 "subtype": "NVMe" 00:35:10.886 } 00:35:10.886 ] 00:35:10.886 21:38:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:10.886 21:38:00 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:10.886 21:38:00 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:10.886 21:38:00 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:11.145 21:38:00 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:35:11.145 21:38:00 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:11.145 21:38:00 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:11.145 21:38:00 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:11.404 21:38:00 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:35:11.404 21:38:00 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:35:11.404 21:38:00 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:35:11.404 21:38:00 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:11.404 21:38:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:11.404 21:38:00 -- common/autotest_common.sh@10 -- # set +x 00:35:11.404 21:38:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:11.404 21:38:00 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:11.404 21:38:00 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:11.404 21:38:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:35:11.404 21:38:00 -- nvmf/common.sh@117 -- # sync 00:35:11.404 21:38:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:11.404 21:38:00 -- nvmf/common.sh@120 -- # set +e 00:35:11.404 21:38:00 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:35:11.404 21:38:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:11.404 rmmod nvme_tcp 00:35:11.663 rmmod nvme_fabrics 00:35:11.663 rmmod nvme_keyring 00:35:11.663 21:38:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:11.663 21:38:00 -- nvmf/common.sh@124 -- # set -e 00:35:11.663 21:38:00 -- nvmf/common.sh@125 -- # return 0 00:35:11.663 21:38:00 -- nvmf/common.sh@478 -- # '[' -n 108774 ']' 00:35:11.663 21:38:00 -- nvmf/common.sh@479 -- # killprocess 108774 00:35:11.663 21:38:00 -- common/autotest_common.sh@936 -- # '[' -z 108774 ']' 00:35:11.663 21:38:00 -- common/autotest_common.sh@940 -- # kill -0 108774 00:35:11.663 21:38:00 -- common/autotest_common.sh@941 -- # uname 00:35:11.663 21:38:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:35:11.663 21:38:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 108774 00:35:11.663 21:38:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:35:11.663 21:38:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:35:11.663 21:38:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 108774' killing process with pid 108774 21:38:00 -- common/autotest_common.sh@955 -- # kill 108774 [2024-04-26 21:38:00.714788] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:35:11.663 21:38:00 -- common/autotest_common.sh@960 -- # wait 108774 00:35:11.663 21:38:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:35:11.663 21:38:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:35:11.663 21:38:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:35:11.663 21:38:00 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:11.663 21:38:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:11.663 21:38:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:11.663 21:38:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:11.663 21:38:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:11.923 21:38:00 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:35:11.923 ************************************ 00:35:11.923 END TEST nvmf_identify_passthru 00:35:11.923 ************************************ 00:35:11.923 00:35:11.923 real 0m3.037s 00:35:11.923 user 0m7.391s 00:35:11.923 sys 0m0.832s 00:35:11.923 21:38:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:11.923 21:38:00 -- common/autotest_common.sh@10 -- # set +x 00:35:11.923 21:38:00 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:35:11.923 21:38:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:11.923 21:38:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:11.923 21:38:00 -- common/autotest_common.sh@10 -- # set +x 00:35:11.923 ************************************ 00:35:11.923 START TEST nvmf_dif 00:35:11.923 ************************************ 00:35:11.923 21:38:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:35:12.183 * Looking for test storage... 
00:35:12.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:12.183 21:38:01 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:12.183 21:38:01 -- nvmf/common.sh@7 -- # uname -s 00:35:12.183 21:38:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:12.183 21:38:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:12.183 21:38:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:12.183 21:38:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:12.183 21:38:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:12.183 21:38:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:12.183 21:38:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:12.183 21:38:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:12.183 21:38:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:12.183 21:38:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:12.183 21:38:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:35:12.183 21:38:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:35:12.183 21:38:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:12.183 21:38:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:12.183 21:38:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:12.183 21:38:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:12.184 21:38:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:12.184 21:38:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:12.184 21:38:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:12.184 21:38:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:12.184 21:38:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.184 21:38:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.184 21:38:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.184 21:38:01 -- paths/export.sh@5 -- # export PATH 00:35:12.184 21:38:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:12.184 21:38:01 -- nvmf/common.sh@47 -- # : 0 00:35:12.184 21:38:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:12.184 21:38:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:12.184 21:38:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:12.184 21:38:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:12.184 21:38:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:12.184 21:38:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:12.184 21:38:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:12.184 21:38:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:12.184 21:38:01 -- target/dif.sh@15 -- # NULL_META=16 00:35:12.184 21:38:01 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:12.184 21:38:01 -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:12.184 21:38:01 -- target/dif.sh@15 -- # NULL_DIF=1 00:35:12.184 21:38:01 -- target/dif.sh@135 -- # nvmftestinit 00:35:12.184 21:38:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:35:12.184 21:38:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:12.184 21:38:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:35:12.184 21:38:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:35:12.184 21:38:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:35:12.184 21:38:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.184 21:38:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:12.184 21:38:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.184 21:38:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:35:12.184 21:38:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:35:12.184 21:38:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:35:12.184 21:38:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:35:12.184 21:38:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:35:12.184 21:38:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:35:12.184 21:38:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:12.184 21:38:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:12.184 21:38:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:12.184 21:38:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:35:12.184 21:38:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:12.184 21:38:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:12.184 21:38:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:12.184 21:38:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:12.184 21:38:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:12.184 21:38:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:12.184 21:38:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:12.184 21:38:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:12.184 21:38:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:35:12.184 21:38:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:35:12.184 Cannot find device "nvmf_tgt_br" 
00:35:12.184 21:38:01 -- nvmf/common.sh@155 -- # true 00:35:12.184 21:38:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:35:12.184 Cannot find device "nvmf_tgt_br2" 00:35:12.184 21:38:01 -- nvmf/common.sh@156 -- # true 00:35:12.184 21:38:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:35:12.184 21:38:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:35:12.184 Cannot find device "nvmf_tgt_br" 00:35:12.184 21:38:01 -- nvmf/common.sh@158 -- # true 00:35:12.184 21:38:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:35:12.184 Cannot find device "nvmf_tgt_br2" 00:35:12.184 21:38:01 -- nvmf/common.sh@159 -- # true 00:35:12.184 21:38:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:35:12.184 21:38:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:35:12.184 21:38:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:12.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:12.184 21:38:01 -- nvmf/common.sh@162 -- # true 00:35:12.184 21:38:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:12.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:12.184 21:38:01 -- nvmf/common.sh@163 -- # true 00:35:12.184 21:38:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:35:12.184 21:38:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:12.184 21:38:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:12.184 21:38:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:12.443 21:38:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:12.443 21:38:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:12.443 21:38:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:12.443 21:38:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:12.443 21:38:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:12.443 21:38:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:35:12.443 21:38:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:35:12.443 21:38:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:35:12.443 21:38:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:35:12.443 21:38:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:12.443 21:38:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:12.443 21:38:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:12.443 21:38:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:35:12.443 21:38:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:35:12.443 21:38:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:35:12.443 21:38:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:12.443 21:38:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:12.443 21:38:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:12.443 21:38:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:12.443 21:38:01 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:35:12.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:12.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:35:12.443 00:35:12.443 --- 10.0.0.2 ping statistics --- 00:35:12.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.443 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:35:12.443 21:38:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:35:12.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:12.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:35:12.444 00:35:12.444 --- 10.0.0.3 ping statistics --- 00:35:12.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.444 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:35:12.444 21:38:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:12.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:12.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:35:12.444 00:35:12.444 --- 10.0.0.1 ping statistics --- 00:35:12.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.444 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:35:12.444 21:38:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:12.444 21:38:01 -- nvmf/common.sh@422 -- # return 0 00:35:12.444 21:38:01 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:35:12.444 21:38:01 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:13.028 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:13.028 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:13.028 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:13.028 21:38:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:13.029 21:38:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:35:13.029 21:38:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:35:13.029 21:38:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:13.029 21:38:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:35:13.029 21:38:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:35:13.029 21:38:02 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:13.029 21:38:02 -- target/dif.sh@137 -- # nvmfappstart 00:35:13.029 21:38:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:35:13.029 21:38:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:35:13.029 21:38:02 -- common/autotest_common.sh@10 -- # set +x 00:35:13.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.029 21:38:02 -- nvmf/common.sh@470 -- # nvmfpid=109125 00:35:13.029 21:38:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:13.029 21:38:02 -- nvmf/common.sh@471 -- # waitforlisten 109125 00:35:13.029 21:38:02 -- common/autotest_common.sh@817 -- # '[' -z 109125 ']' 00:35:13.029 21:38:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.029 21:38:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:35:13.029 21:38:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
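For reference, the nvmf_veth_init sequence traced above builds a small host-local topology before the target is started (the earlier "Cannot find device" / "Cannot open network namespace" messages are just the cleanup of leftovers from a previous run, each followed by "# true"). Condensed into a standalone sketch, with interface, namespace and address names copied from the trace rather than the literal common.sh code:

    # target-side veth ends move into a namespace, initiator end stays in the root ns
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addressing: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listen addresses
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and bridge the root-namespace peers together
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # let NVMe/TCP (port 4420) in and allow traffic to hairpin across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) confirm the path works before nvmf_tgt is launched with "ip netns exec nvmf_tgt_ns_spdk".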
00:35:13.029 21:38:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:35:13.029 21:38:02 -- common/autotest_common.sh@10 -- # set +x 00:35:13.029 [2024-04-26 21:38:02.147437] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:35:13.029 [2024-04-26 21:38:02.147506] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:13.287 [2024-04-26 21:38:02.286088] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.288 [2024-04-26 21:38:02.339162] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:13.288 [2024-04-26 21:38:02.339296] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:13.288 [2024-04-26 21:38:02.339354] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:13.288 [2024-04-26 21:38:02.339385] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:13.288 [2024-04-26 21:38:02.339402] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:13.288 [2024-04-26 21:38:02.339462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.855 21:38:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:35:13.855 21:38:03 -- common/autotest_common.sh@850 -- # return 0 00:35:13.855 21:38:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:35:13.855 21:38:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:35:13.855 21:38:03 -- common/autotest_common.sh@10 -- # set +x 00:35:13.855 21:38:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:13.855 21:38:03 -- target/dif.sh@139 -- # create_transport 00:35:13.855 21:38:03 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:13.855 21:38:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:13.855 21:38:03 -- common/autotest_common.sh@10 -- # set +x 00:35:14.114 [2024-04-26 21:38:03.114867] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:14.114 21:38:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.114 21:38:03 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:14.114 21:38:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:14.114 21:38:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:14.114 21:38:03 -- common/autotest_common.sh@10 -- # set +x 00:35:14.114 ************************************ 00:35:14.114 START TEST fio_dif_1_default 00:35:14.114 ************************************ 00:35:14.114 21:38:03 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:35:14.114 21:38:03 -- target/dif.sh@86 -- # create_subsystems 0 00:35:14.114 21:38:03 -- target/dif.sh@28 -- # local sub 00:35:14.114 21:38:03 -- target/dif.sh@30 -- # for sub in "$@" 00:35:14.114 21:38:03 -- target/dif.sh@31 -- # create_subsystem 0 00:35:14.114 21:38:03 -- target/dif.sh@18 -- # local sub_id=0 00:35:14.114 21:38:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:14.114 21:38:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.114 21:38:03 -- common/autotest_common.sh@10 -- # set +x 00:35:14.114 bdev_null0 00:35:14.114 21:38:03 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.114 21:38:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:14.114 21:38:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.114 21:38:03 -- common/autotest_common.sh@10 -- # set +x 00:35:14.114 21:38:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.114 21:38:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:14.114 21:38:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.114 21:38:03 -- common/autotest_common.sh@10 -- # set +x 00:35:14.114 21:38:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.114 21:38:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:14.114 21:38:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:14.114 21:38:03 -- common/autotest_common.sh@10 -- # set +x 00:35:14.114 [2024-04-26 21:38:03.222779] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.114 21:38:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:14.114 21:38:03 -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:14.114 21:38:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.114 21:38:03 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:14.114 21:38:03 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.114 21:38:03 -- target/dif.sh@82 -- # gen_fio_conf 00:35:14.114 21:38:03 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:35:14.114 21:38:03 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:14.114 21:38:03 -- target/dif.sh@54 -- # local file 00:35:14.114 21:38:03 -- common/autotest_common.sh@1325 -- # local sanitizers 00:35:14.114 21:38:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:14.114 21:38:03 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:14.114 21:38:03 -- target/dif.sh@56 -- # cat 00:35:14.114 21:38:03 -- common/autotest_common.sh@1327 -- # shift 00:35:14.114 21:38:03 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:35:14.114 21:38:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:14.114 21:38:03 -- nvmf/common.sh@521 -- # config=() 00:35:14.114 21:38:03 -- nvmf/common.sh@521 -- # local subsystem config 00:35:14.114 21:38:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:14.114 21:38:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:14.114 { 00:35:14.114 "params": { 00:35:14.114 "name": "Nvme$subsystem", 00:35:14.114 "trtype": "$TEST_TRANSPORT", 00:35:14.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:14.114 "adrfam": "ipv4", 00:35:14.114 "trsvcid": "$NVMF_PORT", 00:35:14.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:14.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:14.114 "hdgst": ${hdgst:-false}, 00:35:14.114 "ddgst": ${ddgst:-false} 00:35:14.114 }, 00:35:14.114 "method": "bdev_nvme_attach_controller" 00:35:14.114 } 00:35:14.114 EOF 00:35:14.114 )") 00:35:14.114 21:38:03 -- common/autotest_common.sh@1331 -- # grep libasan 00:35:14.114 21:38:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:14.114 21:38:03 -- 
nvmf/common.sh@543 -- # cat 00:35:14.114 21:38:03 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:14.114 21:38:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:14.115 21:38:03 -- target/dif.sh@72 -- # (( file <= files )) 00:35:14.115 21:38:03 -- nvmf/common.sh@545 -- # jq . 00:35:14.115 21:38:03 -- nvmf/common.sh@546 -- # IFS=, 00:35:14.115 21:38:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:35:14.115 "params": { 00:35:14.115 "name": "Nvme0", 00:35:14.115 "trtype": "tcp", 00:35:14.115 "traddr": "10.0.0.2", 00:35:14.115 "adrfam": "ipv4", 00:35:14.115 "trsvcid": "4420", 00:35:14.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.115 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.115 "hdgst": false, 00:35:14.115 "ddgst": false 00:35:14.115 }, 00:35:14.115 "method": "bdev_nvme_attach_controller" 00:35:14.115 }' 00:35:14.115 21:38:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:14.115 21:38:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:14.115 21:38:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:14.115 21:38:03 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:35:14.115 21:38:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:14.115 21:38:03 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:14.115 21:38:03 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:14.115 21:38:03 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:14.115 21:38:03 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:14.115 21:38:03 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.375 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:14.375 fio-3.35 00:35:14.375 Starting 1 thread 00:35:26.585 00:35:26.585 filename0: (groupid=0, jobs=1): err= 0: pid=109215: Fri Apr 26 21:38:13 2024 00:35:26.585 read: IOPS=1493, BW=5974KiB/s (6117kB/s)(58.3MiB/10001msec) 00:35:26.585 slat (nsec): min=5799, max=55856, avg=7849.10, stdev=3125.01 00:35:26.585 clat (usec): min=336, max=42573, avg=2655.66, stdev=9225.58 00:35:26.585 lat (usec): min=342, max=42581, avg=2663.51, stdev=9225.59 00:35:26.585 clat percentiles (usec): 00:35:26.585 | 1.00th=[ 388], 5.00th=[ 408], 10.00th=[ 412], 20.00th=[ 420], 00:35:26.585 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 437], 60.00th=[ 441], 00:35:26.585 | 70.00th=[ 445], 80.00th=[ 453], 90.00th=[ 486], 95.00th=[40633], 00:35:26.585 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[42206], 00:35:26.585 | 99.99th=[42730] 00:35:26.585 bw ( KiB/s): min= 3360, max=10336, per=98.75%, avg=5899.79, stdev=1570.11, samples=19 00:35:26.585 iops : min= 840, max= 2584, avg=1474.95, stdev=392.53, samples=19 00:35:26.585 lat (usec) : 500=91.80%, 750=2.60%, 1000=0.01% 00:35:26.585 lat (msec) : 2=0.12%, 50=5.46% 00:35:26.585 cpu : usr=93.57%, sys=5.55%, ctx=24, majf=0, minf=9 00:35:26.585 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:26.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.585 issued rwts: total=14936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.585 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:26.585 00:35:26.585 Run status 
group 0 (all jobs): 00:35:26.585 READ: bw=5974KiB/s (6117kB/s), 5974KiB/s-5974KiB/s (6117kB/s-6117kB/s), io=58.3MiB (61.2MB), run=10001-10001msec 00:35:26.585 21:38:14 -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:26.585 21:38:14 -- target/dif.sh@43 -- # local sub 00:35:26.585 21:38:14 -- target/dif.sh@45 -- # for sub in "$@" 00:35:26.585 21:38:14 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:26.585 21:38:14 -- target/dif.sh@36 -- # local sub_id=0 00:35:26.585 21:38:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:26.585 21:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:26.585 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:35:26.585 21:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:26.585 21:38:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:26.585 21:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:26.585 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:35:26.585 21:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:26.585 00:35:26.585 real 0m10.954s 00:35:26.585 user 0m9.955s 00:35:26.585 sys 0m0.836s 00:35:26.585 21:38:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:26.585 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:35:26.585 ************************************ 00:35:26.585 END TEST fio_dif_1_default 00:35:26.585 ************************************ 00:35:26.585 21:38:14 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:26.585 21:38:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:26.585 21:38:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:26.585 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:35:26.585 ************************************ 00:35:26.585 START TEST fio_dif_1_multi_subsystems 00:35:26.585 ************************************ 00:35:26.585 21:38:14 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:35:26.585 21:38:14 -- target/dif.sh@92 -- # local files=1 00:35:26.585 21:38:14 -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:26.585 21:38:14 -- target/dif.sh@28 -- # local sub 00:35:26.585 21:38:14 -- target/dif.sh@30 -- # for sub in "$@" 00:35:26.585 21:38:14 -- target/dif.sh@31 -- # create_subsystem 0 00:35:26.585 21:38:14 -- target/dif.sh@18 -- # local sub_id=0 00:35:26.585 21:38:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:26.585 21:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:26.585 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:35:26.585 bdev_null0 00:35:26.585 21:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:26.585 21:38:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:26.585 21:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:26.585 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:35:26.585 21:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:26.585 21:38:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:26.585 21:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:26.585 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:35:26.585 21:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:26.585 21:38:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:26.585 21:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:26.585 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:35:26.585 [2024-04-26 21:38:14.325877] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:26.585 21:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:26.585 21:38:14 -- target/dif.sh@30 -- # for sub in "$@" 00:35:26.585 21:38:14 -- target/dif.sh@31 -- # create_subsystem 1 00:35:26.585 21:38:14 -- target/dif.sh@18 -- # local sub_id=1 00:35:26.585 21:38:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:26.585 21:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:26.585 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:35:26.585 bdev_null1 00:35:26.585 21:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:26.585 21:38:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:26.585 21:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:26.585 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:35:26.585 21:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:26.585 21:38:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:26.585 21:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:26.585 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:35:26.585 21:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:26.585 21:38:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:26.585 21:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:26.585 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:35:26.585 21:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:26.586 21:38:14 -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:26.586 21:38:14 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:26.586 21:38:14 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:26.586 21:38:14 -- nvmf/common.sh@521 -- # config=() 00:35:26.586 21:38:14 -- nvmf/common.sh@521 -- # local subsystem config 00:35:26.586 21:38:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:26.586 21:38:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:26.586 { 00:35:26.586 "params": { 00:35:26.586 "name": "Nvme$subsystem", 00:35:26.586 "trtype": "$TEST_TRANSPORT", 00:35:26.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:26.586 "adrfam": "ipv4", 00:35:26.586 "trsvcid": "$NVMF_PORT", 00:35:26.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:26.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:26.586 "hdgst": ${hdgst:-false}, 00:35:26.586 "ddgst": ${ddgst:-false} 00:35:26.586 }, 00:35:26.586 "method": "bdev_nvme_attach_controller" 00:35:26.586 } 00:35:26.586 EOF 00:35:26.586 )") 00:35:26.586 21:38:14 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.586 21:38:14 -- target/dif.sh@82 -- # gen_fio_conf 00:35:26.586 21:38:14 -- target/dif.sh@54 -- # local file 00:35:26.586 21:38:14 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.586 21:38:14 -- target/dif.sh@56 -- # cat 00:35:26.586 21:38:14 -- common/autotest_common.sh@1323 
-- # local fio_dir=/usr/src/fio 00:35:26.586 21:38:14 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:26.586 21:38:14 -- common/autotest_common.sh@1325 -- # local sanitizers 00:35:26.586 21:38:14 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:26.586 21:38:14 -- nvmf/common.sh@543 -- # cat 00:35:26.586 21:38:14 -- common/autotest_common.sh@1327 -- # shift 00:35:26.586 21:38:14 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:35:26.586 21:38:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:26.586 21:38:14 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:26.586 21:38:14 -- target/dif.sh@72 -- # (( file <= files )) 00:35:26.586 21:38:14 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:26.586 21:38:14 -- target/dif.sh@73 -- # cat 00:35:26.586 21:38:14 -- common/autotest_common.sh@1331 -- # grep libasan 00:35:26.586 21:38:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:26.586 21:38:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:26.586 21:38:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:26.586 { 00:35:26.586 "params": { 00:35:26.586 "name": "Nvme$subsystem", 00:35:26.586 "trtype": "$TEST_TRANSPORT", 00:35:26.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:26.586 "adrfam": "ipv4", 00:35:26.586 "trsvcid": "$NVMF_PORT", 00:35:26.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:26.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:26.586 "hdgst": ${hdgst:-false}, 00:35:26.586 "ddgst": ${ddgst:-false} 00:35:26.586 }, 00:35:26.586 "method": "bdev_nvme_attach_controller" 00:35:26.586 } 00:35:26.586 EOF 00:35:26.586 )") 00:35:26.586 21:38:14 -- target/dif.sh@72 -- # (( file++ )) 00:35:26.586 21:38:14 -- target/dif.sh@72 -- # (( file <= files )) 00:35:26.586 21:38:14 -- nvmf/common.sh@543 -- # cat 00:35:26.586 21:38:14 -- nvmf/common.sh@545 -- # jq . 
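The rpc_cmd calls a moment earlier in the trace are the harness wrapper around SPDK's scripts/rpc.py, talking to the nvmf_tgt listening on /var/tmp/spdk.sock. Outside the harness, the two DIF-type-1 null-bdev subsystems used by this test could be created roughly as follows (a sketch; the rpc helper function is mine, the arguments are copied from the trace, and the rpc.py path assumes the usual location in the checked-out repo):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    for i in 0 1; do
        # 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 1
        rpc bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
        rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done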
00:35:26.586 21:38:14 -- nvmf/common.sh@546 -- # IFS=, 00:35:26.586 21:38:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:35:26.586 "params": { 00:35:26.586 "name": "Nvme0", 00:35:26.586 "trtype": "tcp", 00:35:26.586 "traddr": "10.0.0.2", 00:35:26.586 "adrfam": "ipv4", 00:35:26.586 "trsvcid": "4420", 00:35:26.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:26.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:26.586 "hdgst": false, 00:35:26.586 "ddgst": false 00:35:26.586 }, 00:35:26.586 "method": "bdev_nvme_attach_controller" 00:35:26.586 },{ 00:35:26.586 "params": { 00:35:26.586 "name": "Nvme1", 00:35:26.586 "trtype": "tcp", 00:35:26.586 "traddr": "10.0.0.2", 00:35:26.586 "adrfam": "ipv4", 00:35:26.586 "trsvcid": "4420", 00:35:26.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:26.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:26.586 "hdgst": false, 00:35:26.586 "ddgst": false 00:35:26.586 }, 00:35:26.586 "method": "bdev_nvme_attach_controller" 00:35:26.586 }' 00:35:26.586 21:38:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:26.586 21:38:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:26.586 21:38:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:26.586 21:38:14 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:26.586 21:38:14 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:35:26.586 21:38:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:26.586 21:38:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:26.586 21:38:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:26.586 21:38:14 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:26.586 21:38:14 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:26.586 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:26.586 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:26.586 fio-3.35 00:35:26.586 Starting 2 threads 00:35:36.562 00:35:36.562 filename0: (groupid=0, jobs=1): err= 0: pid=109378: Fri Apr 26 21:38:25 2024 00:35:36.562 read: IOPS=223, BW=895KiB/s (917kB/s)(8960KiB/10010msec) 00:35:36.562 slat (nsec): min=5664, max=85707, avg=10284.25, stdev=6619.27 00:35:36.562 clat (usec): min=342, max=42419, avg=17842.36, stdev=19974.63 00:35:36.562 lat (usec): min=348, max=42426, avg=17852.64, stdev=19974.02 00:35:36.562 clat percentiles (usec): 00:35:36.562 | 1.00th=[ 359], 5.00th=[ 404], 10.00th=[ 420], 20.00th=[ 441], 00:35:36.562 | 30.00th=[ 469], 40.00th=[ 523], 50.00th=[ 824], 60.00th=[40633], 00:35:36.562 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:36.562 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:35:36.562 | 99.99th=[42206] 00:35:36.562 bw ( KiB/s): min= 512, max= 2016, per=52.27%, avg=894.40, stdev=376.27, samples=20 00:35:36.562 iops : min= 128, max= 504, avg=223.60, stdev=94.07, samples=20 00:35:36.562 lat (usec) : 500=35.98%, 750=9.73%, 1000=10.98% 00:35:36.562 lat (msec) : 2=0.27%, 4=0.18%, 50=42.86% 00:35:36.562 cpu : usr=97.44%, sys=2.15%, ctx=15, majf=0, minf=9 00:35:36.562 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.562 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.562 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.562 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:36.562 filename1: (groupid=0, jobs=1): err= 0: pid=109379: Fri Apr 26 21:38:25 2024 00:35:36.562 read: IOPS=203, BW=815KiB/s (835kB/s)(8160KiB/10007msec) 00:35:36.562 slat (nsec): min=5554, max=83252, avg=11082.76, stdev=7600.15 00:35:36.562 clat (usec): min=334, max=42175, avg=19585.51, stdev=20182.28 00:35:36.562 lat (usec): min=339, max=42190, avg=19596.60, stdev=20181.60 00:35:36.562 clat percentiles (usec): 00:35:36.562 | 1.00th=[ 363], 5.00th=[ 408], 10.00th=[ 429], 20.00th=[ 453], 00:35:36.562 | 30.00th=[ 482], 40.00th=[ 742], 50.00th=[ 922], 60.00th=[40633], 00:35:36.562 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:36.562 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:35:36.562 | 99.99th=[42206] 00:35:36.562 bw ( KiB/s): min= 448, max= 1280, per=47.83%, avg=818.53, stdev=222.30, samples=19 00:35:36.562 iops : min= 112, max= 320, avg=204.63, stdev=55.57, samples=19 00:35:36.562 lat (usec) : 500=34.95%, 750=5.49%, 1000=11.96% 00:35:36.562 lat (msec) : 2=0.34%, 4=0.20%, 50=47.06% 00:35:36.562 cpu : usr=97.51%, sys=2.07%, ctx=29, majf=0, minf=9 00:35:36.562 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:36.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:36.562 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:36.562 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:36.562 00:35:36.562 Run status group 0 (all jobs): 00:35:36.562 READ: bw=1710KiB/s (1751kB/s), 815KiB/s-895KiB/s (835kB/s-917kB/s), io=16.7MiB (17.5MB), run=10007-10010msec 00:35:36.562 21:38:25 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:36.562 21:38:25 -- target/dif.sh@43 -- # local sub 00:35:36.562 21:38:25 -- target/dif.sh@45 -- # for sub in "$@" 00:35:36.562 21:38:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:36.562 21:38:25 -- target/dif.sh@36 -- # local sub_id=0 00:35:36.562 21:38:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:36.562 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:36.562 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:35:36.562 21:38:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:36.562 21:38:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:36.562 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:36.562 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:35:36.562 21:38:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:36.562 21:38:25 -- target/dif.sh@45 -- # for sub in "$@" 00:35:36.562 21:38:25 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:36.562 21:38:25 -- target/dif.sh@36 -- # local sub_id=1 00:35:36.562 21:38:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:36.562 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:36.562 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:35:36.562 21:38:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:36.562 21:38:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:36.562 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:36.562 21:38:25 -- 
common/autotest_common.sh@10 -- # set +x 00:35:36.562 21:38:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:36.562 00:35:36.562 real 0m11.181s 00:35:36.562 user 0m20.302s 00:35:36.562 sys 0m0.705s 00:35:36.562 21:38:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:35:36.562 ************************************ 00:35:36.562 END TEST fio_dif_1_multi_subsystems 00:35:36.562 ************************************ 00:35:36.562 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:35:36.562 21:38:25 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:36.562 21:38:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:35:36.562 21:38:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:35:36.562 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:35:36.562 ************************************ 00:35:36.562 START TEST fio_dif_rand_params 00:35:36.562 ************************************ 00:35:36.562 21:38:25 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:35:36.562 21:38:25 -- target/dif.sh@100 -- # local NULL_DIF 00:35:36.562 21:38:25 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:36.562 21:38:25 -- target/dif.sh@103 -- # NULL_DIF=3 00:35:36.562 21:38:25 -- target/dif.sh@103 -- # bs=128k 00:35:36.562 21:38:25 -- target/dif.sh@103 -- # numjobs=3 00:35:36.562 21:38:25 -- target/dif.sh@103 -- # iodepth=3 00:35:36.562 21:38:25 -- target/dif.sh@103 -- # runtime=5 00:35:36.562 21:38:25 -- target/dif.sh@105 -- # create_subsystems 0 00:35:36.562 21:38:25 -- target/dif.sh@28 -- # local sub 00:35:36.562 21:38:25 -- target/dif.sh@30 -- # for sub in "$@" 00:35:36.562 21:38:25 -- target/dif.sh@31 -- # create_subsystem 0 00:35:36.562 21:38:25 -- target/dif.sh@18 -- # local sub_id=0 00:35:36.562 21:38:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:36.562 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:36.562 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:35:36.562 bdev_null0 00:35:36.562 21:38:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:36.562 21:38:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:36.562 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:36.562 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:35:36.562 21:38:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:36.562 21:38:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:36.562 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:36.562 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:35:36.562 21:38:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:36.562 21:38:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:36.562 21:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:36.562 21:38:25 -- common/autotest_common.sh@10 -- # set +x 00:35:36.562 [2024-04-26 21:38:25.658660] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.562 21:38:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:36.562 21:38:25 -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:36.562 21:38:25 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:36.562 21:38:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:36.562 21:38:25 
-- nvmf/common.sh@521 -- # config=() 00:35:36.562 21:38:25 -- nvmf/common.sh@521 -- # local subsystem config 00:35:36.562 21:38:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.562 21:38:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:36.562 21:38:25 -- target/dif.sh@82 -- # gen_fio_conf 00:35:36.562 21:38:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:36.562 { 00:35:36.562 "params": { 00:35:36.562 "name": "Nvme$subsystem", 00:35:36.562 "trtype": "$TEST_TRANSPORT", 00:35:36.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.562 "adrfam": "ipv4", 00:35:36.563 "trsvcid": "$NVMF_PORT", 00:35:36.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.563 "hdgst": ${hdgst:-false}, 00:35:36.563 "ddgst": ${ddgst:-false} 00:35:36.563 }, 00:35:36.563 "method": "bdev_nvme_attach_controller" 00:35:36.563 } 00:35:36.563 EOF 00:35:36.563 )") 00:35:36.563 21:38:25 -- target/dif.sh@54 -- # local file 00:35:36.563 21:38:25 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.563 21:38:25 -- target/dif.sh@56 -- # cat 00:35:36.563 21:38:25 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:35:36.563 21:38:25 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:36.563 21:38:25 -- common/autotest_common.sh@1325 -- # local sanitizers 00:35:36.563 21:38:25 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:36.563 21:38:25 -- common/autotest_common.sh@1327 -- # shift 00:35:36.563 21:38:25 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:35:36.563 21:38:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:36.563 21:38:25 -- nvmf/common.sh@543 -- # cat 00:35:36.563 21:38:25 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:36.563 21:38:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:36.563 21:38:25 -- target/dif.sh@72 -- # (( file <= files )) 00:35:36.563 21:38:25 -- common/autotest_common.sh@1331 -- # grep libasan 00:35:36.563 21:38:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:36.563 21:38:25 -- nvmf/common.sh@545 -- # jq . 
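The heredoc fragment accumulated in $config above is merged by jq into the JSON printed next, and together with the job file from gen_fio_conf it is handed to fio over /dev/fd. The launch traced below amounts to roughly the following (a sketch of what fio_bdev/fio_plugin do; the process substitutions stand in for the harness's /dev/fd/62 and /dev/fd/61, and the paths are copied from the trace):

    # preload SPDK's fio plugin and point the spdk_bdev ioengine at the generated
    # bdev_nvme config; the trailing file is the fio job description
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf <(create_json_sub_conf 0) \
        <(gen_fio_conf)

fio then drives the null bdevs over NVMe/TCP through the spdk_bdev ioengine rather than the kernel block layer, which is why the dif-insert-or-strip behaviour of the transport is exercised end to end.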
00:35:36.563 21:38:25 -- nvmf/common.sh@546 -- # IFS=, 00:35:36.563 21:38:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:35:36.563 "params": { 00:35:36.563 "name": "Nvme0", 00:35:36.563 "trtype": "tcp", 00:35:36.563 "traddr": "10.0.0.2", 00:35:36.563 "adrfam": "ipv4", 00:35:36.563 "trsvcid": "4420", 00:35:36.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:36.563 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:36.563 "hdgst": false, 00:35:36.563 "ddgst": false 00:35:36.563 }, 00:35:36.563 "method": "bdev_nvme_attach_controller" 00:35:36.563 }' 00:35:36.563 21:38:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:36.563 21:38:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:36.563 21:38:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:36.563 21:38:25 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:35:36.563 21:38:25 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:36.563 21:38:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:36.563 21:38:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:36.563 21:38:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:36.563 21:38:25 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:36.563 21:38:25 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.822 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:36.822 ... 00:35:36.822 fio-3.35 00:35:36.822 Starting 3 threads 00:35:43.383 00:35:43.384 filename0: (groupid=0, jobs=1): err= 0: pid=109540: Fri Apr 26 21:38:31 2024 00:35:43.384 read: IOPS=261, BW=32.7MiB/s (34.2MB/s)(163MiB/5003msec) 00:35:43.384 slat (nsec): min=6337, max=63599, avg=12206.31, stdev=5670.32 00:35:43.384 clat (usec): min=5242, max=53136, avg=11457.92, stdev=3085.64 00:35:43.384 lat (usec): min=5280, max=53145, avg=11470.13, stdev=3085.79 00:35:43.384 clat percentiles (usec): 00:35:43.384 | 1.00th=[ 6521], 5.00th=[ 7701], 10.00th=[10028], 20.00th=[10552], 00:35:43.384 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:35:43.384 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:35:43.384 | 99.00th=[13698], 99.50th=[14091], 99.90th=[52691], 99.95th=[53216], 00:35:43.384 | 99.99th=[53216] 00:35:43.384 bw ( KiB/s): min=29952, max=35328, per=34.39%, avg=33408.00, stdev=1503.66, samples=10 00:35:43.384 iops : min= 234, max= 276, avg=261.00, stdev=11.75, samples=10 00:35:43.384 lat (msec) : 10=10.18%, 20=89.36%, 100=0.46% 00:35:43.384 cpu : usr=94.74%, sys=4.12%, ctx=40, majf=0, minf=0 00:35:43.384 IO depths : 1=11.4%, 2=88.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.384 issued rwts: total=1307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:43.384 filename0: (groupid=0, jobs=1): err= 0: pid=109541: Fri Apr 26 21:38:31 2024 00:35:43.384 read: IOPS=217, BW=27.1MiB/s (28.5MB/s)(136MiB/5003msec) 00:35:43.384 slat (nsec): min=5938, max=63250, avg=11132.94, stdev=5832.57 00:35:43.384 clat (usec): min=5882, max=16451, avg=13791.70, stdev=1683.71 00:35:43.384 lat (usec): min=5896, max=16466, avg=13802.83, 
stdev=1683.94 00:35:43.384 clat percentiles (usec): 00:35:43.384 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[12518], 20.00th=[13435], 00:35:43.384 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:35:43.384 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15270], 95.00th=[15533], 00:35:43.384 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16450], 99.95th=[16450], 00:35:43.384 | 99.99th=[16450] 00:35:43.384 bw ( KiB/s): min=26112, max=30720, per=28.54%, avg=27724.80, stdev=1277.44, samples=10 00:35:43.384 iops : min= 204, max= 240, avg=216.60, stdev= 9.98, samples=10 00:35:43.384 lat (msec) : 10=8.01%, 20=91.99% 00:35:43.384 cpu : usr=95.28%, sys=3.74%, ctx=7, majf=0, minf=0 00:35:43.384 IO depths : 1=32.8%, 2=67.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.384 issued rwts: total=1086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:43.384 filename0: (groupid=0, jobs=1): err= 0: pid=109542: Fri Apr 26 21:38:31 2024 00:35:43.384 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(176MiB/5003msec) 00:35:43.384 slat (nsec): min=6836, max=38491, avg=13053.37, stdev=4248.43 00:35:43.384 clat (usec): min=5748, max=53261, avg=10672.60, stdev=4322.70 00:35:43.384 lat (usec): min=5757, max=53273, avg=10685.65, stdev=4322.74 00:35:43.384 clat percentiles (usec): 00:35:43.384 | 1.00th=[ 6849], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[ 9634], 00:35:43.384 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:35:43.384 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:35:43.384 | 99.00th=[49546], 99.50th=[51119], 99.90th=[53216], 99.95th=[53216], 00:35:43.384 | 99.99th=[53216] 00:35:43.384 bw ( KiB/s): min=32000, max=39168, per=36.92%, avg=35865.60, stdev=2444.92, samples=10 00:35:43.384 iops : min= 250, max= 306, avg=280.20, stdev=19.10, samples=10 00:35:43.384 lat (msec) : 10=35.33%, 20=63.60%, 50=0.07%, 100=1.00% 00:35:43.384 cpu : usr=94.02%, sys=4.74%, ctx=92, majf=0, minf=0 00:35:43.384 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:43.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:43.384 issued rwts: total=1404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:43.384 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:43.384 00:35:43.384 Run status group 0 (all jobs): 00:35:43.384 READ: bw=94.9MiB/s (99.5MB/s), 27.1MiB/s-35.1MiB/s (28.5MB/s-36.8MB/s), io=475MiB (498MB), run=5003-5003msec 00:35:43.384 21:38:31 -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:43.384 21:38:31 -- target/dif.sh@43 -- # local sub 00:35:43.384 21:38:31 -- target/dif.sh@45 -- # for sub in "$@" 00:35:43.384 21:38:31 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:43.384 21:38:31 -- target/dif.sh@36 -- # local sub_id=0 00:35:43.384 21:38:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 21:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 21:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@109 -- # NULL_DIF=2 00:35:43.384 21:38:31 -- target/dif.sh@109 -- # bs=4k 00:35:43.384 21:38:31 -- target/dif.sh@109 -- # numjobs=8 00:35:43.384 21:38:31 -- target/dif.sh@109 -- # iodepth=16 00:35:43.384 21:38:31 -- target/dif.sh@109 -- # runtime= 00:35:43.384 21:38:31 -- target/dif.sh@109 -- # files=2 00:35:43.384 21:38:31 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:43.384 21:38:31 -- target/dif.sh@28 -- # local sub 00:35:43.384 21:38:31 -- target/dif.sh@30 -- # for sub in "$@" 00:35:43.384 21:38:31 -- target/dif.sh@31 -- # create_subsystem 0 00:35:43.384 21:38:31 -- target/dif.sh@18 -- # local sub_id=0 00:35:43.384 21:38:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 bdev_null0 00:35:43.384 21:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 21:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 21:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 [2024-04-26 21:38:31.631885] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:43.384 21:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@30 -- # for sub in "$@" 00:35:43.384 21:38:31 -- target/dif.sh@31 -- # create_subsystem 1 00:35:43.384 21:38:31 -- target/dif.sh@18 -- # local sub_id=1 00:35:43.384 21:38:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 bdev_null1 00:35:43.384 21:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 21:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 21:38:31 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 21:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@30 -- # for sub in "$@" 00:35:43.384 21:38:31 -- target/dif.sh@31 -- # create_subsystem 2 00:35:43.384 21:38:31 -- target/dif.sh@18 -- # local sub_id=2 00:35:43.384 21:38:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 bdev_null2 00:35:43.384 21:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 21:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 21:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:43.384 21:38:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:43.384 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:35:43.384 21:38:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:43.384 21:38:31 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:43.384 21:38:31 -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:43.384 21:38:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:43.384 21:38:31 -- nvmf/common.sh@521 -- # config=() 00:35:43.384 21:38:31 -- nvmf/common.sh@521 -- # local subsystem config 00:35:43.384 21:38:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:43.384 21:38:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:43.384 { 00:35:43.385 "params": { 00:35:43.385 "name": "Nvme$subsystem", 00:35:43.385 "trtype": "$TEST_TRANSPORT", 00:35:43.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:43.385 "adrfam": "ipv4", 00:35:43.385 "trsvcid": "$NVMF_PORT", 00:35:43.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:43.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:43.385 "hdgst": ${hdgst:-false}, 00:35:43.385 "ddgst": ${ddgst:-false} 00:35:43.385 }, 00:35:43.385 "method": "bdev_nvme_attach_controller" 00:35:43.385 } 00:35:43.385 EOF 00:35:43.385 )") 00:35:43.385 21:38:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.385 21:38:31 -- target/dif.sh@82 -- # gen_fio_conf 00:35:43.385 21:38:31 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.385 21:38:31 -- target/dif.sh@54 -- # local file 00:35:43.385 21:38:31 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:35:43.385 
21:38:31 -- target/dif.sh@56 -- # cat 00:35:43.385 21:38:31 -- nvmf/common.sh@543 -- # cat 00:35:43.385 21:38:31 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:43.385 21:38:31 -- common/autotest_common.sh@1325 -- # local sanitizers 00:35:43.385 21:38:31 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:43.385 21:38:31 -- common/autotest_common.sh@1327 -- # shift 00:35:43.385 21:38:31 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:35:43.385 21:38:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:43.385 21:38:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:43.385 21:38:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:43.385 { 00:35:43.385 "params": { 00:35:43.385 "name": "Nvme$subsystem", 00:35:43.385 "trtype": "$TEST_TRANSPORT", 00:35:43.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:43.385 "adrfam": "ipv4", 00:35:43.385 "trsvcid": "$NVMF_PORT", 00:35:43.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:43.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:43.385 "hdgst": ${hdgst:-false}, 00:35:43.385 "ddgst": ${ddgst:-false} 00:35:43.385 }, 00:35:43.385 "method": "bdev_nvme_attach_controller" 00:35:43.385 } 00:35:43.385 EOF 00:35:43.385 )") 00:35:43.385 21:38:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:43.385 21:38:31 -- target/dif.sh@72 -- # (( file <= files )) 00:35:43.385 21:38:31 -- target/dif.sh@73 -- # cat 00:35:43.385 21:38:31 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:43.385 21:38:31 -- common/autotest_common.sh@1331 -- # grep libasan 00:35:43.385 21:38:31 -- nvmf/common.sh@543 -- # cat 00:35:43.385 21:38:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:43.385 21:38:31 -- target/dif.sh@72 -- # (( file++ )) 00:35:43.385 21:38:31 -- target/dif.sh@72 -- # (( file <= files )) 00:35:43.385 21:38:31 -- target/dif.sh@73 -- # cat 00:35:43.385 21:38:31 -- target/dif.sh@72 -- # (( file++ )) 00:35:43.385 21:38:31 -- target/dif.sh@72 -- # (( file <= files )) 00:35:43.385 21:38:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:43.385 21:38:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:43.385 { 00:35:43.385 "params": { 00:35:43.385 "name": "Nvme$subsystem", 00:35:43.385 "trtype": "$TEST_TRANSPORT", 00:35:43.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:43.385 "adrfam": "ipv4", 00:35:43.385 "trsvcid": "$NVMF_PORT", 00:35:43.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:43.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:43.385 "hdgst": ${hdgst:-false}, 00:35:43.385 "ddgst": ${ddgst:-false} 00:35:43.385 }, 00:35:43.385 "method": "bdev_nvme_attach_controller" 00:35:43.385 } 00:35:43.385 EOF 00:35:43.385 )") 00:35:43.385 21:38:31 -- nvmf/common.sh@543 -- # cat 00:35:43.385 21:38:31 -- nvmf/common.sh@545 -- # jq . 
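Each "method"/"params" block in the JSON printed below maps onto the SPDK RPC of the same name; the fio plugin consumes the JSON directly rather than shelling out, but the first block is equivalent to an attach call along these lines (a sketch; the short option names are the usual rpc.py flags and are an assumption here, not something shown in the trace):

    # hypothetical standalone equivalent of the first params block below
    scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0

The second and third blocks differ only in the bdev name, subsystem NQN and host NQN, giving fio one NVMe-oF controller per null bdev (hence the 24 threads that follow: 8 jobs across 3 files).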
00:35:43.385 21:38:31 -- nvmf/common.sh@546 -- # IFS=, 00:35:43.385 21:38:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:35:43.385 "params": { 00:35:43.385 "name": "Nvme0", 00:35:43.385 "trtype": "tcp", 00:35:43.385 "traddr": "10.0.0.2", 00:35:43.385 "adrfam": "ipv4", 00:35:43.385 "trsvcid": "4420", 00:35:43.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:43.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:43.385 "hdgst": false, 00:35:43.385 "ddgst": false 00:35:43.385 }, 00:35:43.385 "method": "bdev_nvme_attach_controller" 00:35:43.385 },{ 00:35:43.385 "params": { 00:35:43.385 "name": "Nvme1", 00:35:43.385 "trtype": "tcp", 00:35:43.385 "traddr": "10.0.0.2", 00:35:43.385 "adrfam": "ipv4", 00:35:43.385 "trsvcid": "4420", 00:35:43.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:43.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:43.385 "hdgst": false, 00:35:43.385 "ddgst": false 00:35:43.385 }, 00:35:43.385 "method": "bdev_nvme_attach_controller" 00:35:43.385 },{ 00:35:43.385 "params": { 00:35:43.385 "name": "Nvme2", 00:35:43.385 "trtype": "tcp", 00:35:43.385 "traddr": "10.0.0.2", 00:35:43.385 "adrfam": "ipv4", 00:35:43.385 "trsvcid": "4420", 00:35:43.385 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:43.385 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:43.385 "hdgst": false, 00:35:43.385 "ddgst": false 00:35:43.385 }, 00:35:43.385 "method": "bdev_nvme_attach_controller" 00:35:43.385 }' 00:35:43.385 21:38:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:43.385 21:38:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:43.385 21:38:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:43.385 21:38:31 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:43.385 21:38:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:43.385 21:38:31 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:35:43.385 21:38:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:43.385 21:38:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:43.385 21:38:31 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:43.385 21:38:31 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.385 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:43.385 ... 00:35:43.385 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:43.385 ... 00:35:43.385 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:43.385 ... 
00:35:43.385 fio-3.35 00:35:43.385 Starting 24 threads 00:35:55.582 00:35:55.582 filename0: (groupid=0, jobs=1): err= 0: pid=109638: Fri Apr 26 21:38:42 2024 00:35:55.582 read: IOPS=232, BW=929KiB/s (951kB/s)(9320KiB/10036msec) 00:35:55.582 slat (usec): min=6, max=8033, avg=20.84, stdev=262.50 00:35:55.582 clat (msec): min=27, max=203, avg=68.65, stdev=22.60 00:35:55.582 lat (msec): min=27, max=203, avg=68.68, stdev=22.60 00:35:55.582 clat percentiles (msec): 00:35:55.582 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 48], 00:35:55.582 | 30.00th=[ 54], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 72], 00:35:55.582 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 109], 00:35:55.582 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 205], 99.95th=[ 205], 00:35:55.582 | 99.99th=[ 205] 00:35:55.582 bw ( KiB/s): min= 600, max= 1354, per=4.73%, avg=929.65, stdev=195.27, samples=20 00:35:55.582 iops : min= 150, max= 338, avg=232.30, stdev=48.71, samples=20 00:35:55.582 lat (msec) : 50=26.57%, 100=64.12%, 250=9.31% 00:35:55.582 cpu : usr=39.14%, sys=0.58%, ctx=1096, majf=0, minf=9 00:35:55.582 IO depths : 1=0.6%, 2=1.4%, 4=9.1%, 8=76.1%, 16=12.7%, 32=0.0%, >=64=0.0% 00:35:55.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.582 complete : 0=0.0%, 4=89.7%, 8=5.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.582 issued rwts: total=2330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.582 filename0: (groupid=0, jobs=1): err= 0: pid=109639: Fri Apr 26 21:38:42 2024 00:35:55.582 read: IOPS=177, BW=709KiB/s (726kB/s)(7104KiB/10016msec) 00:35:55.582 slat (usec): min=6, max=8036, avg=16.63, stdev=190.47 00:35:55.582 clat (msec): min=29, max=189, avg=90.15, stdev=25.81 00:35:55.582 lat (msec): min=29, max=189, avg=90.16, stdev=25.82 00:35:55.582 clat percentiles (msec): 00:35:55.582 | 1.00th=[ 36], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 71], 00:35:55.582 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 94], 00:35:55.582 | 70.00th=[ 101], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 142], 00:35:55.582 | 99.00th=[ 171], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 190], 00:35:55.582 | 99.99th=[ 190] 00:35:55.582 bw ( KiB/s): min= 512, max= 896, per=3.60%, avg=707.05, stdev=120.97, samples=19 00:35:55.582 iops : min= 128, max= 224, avg=176.74, stdev=30.26, samples=19 00:35:55.582 lat (msec) : 50=4.90%, 100=64.75%, 250=30.35% 00:35:55.582 cpu : usr=32.69%, sys=0.51%, ctx=919, majf=0, minf=9 00:35:55.582 IO depths : 1=1.6%, 2=3.8%, 4=13.1%, 8=69.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:55.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.582 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.582 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.582 filename0: (groupid=0, jobs=1): err= 0: pid=109640: Fri Apr 26 21:38:42 2024 00:35:55.582 read: IOPS=209, BW=837KiB/s (857kB/s)(8396KiB/10031msec) 00:35:55.582 slat (usec): min=5, max=8025, avg=25.65, stdev=349.53 00:35:55.582 clat (msec): min=21, max=179, avg=76.32, stdev=24.74 00:35:55.582 lat (msec): min=21, max=179, avg=76.35, stdev=24.74 00:35:55.582 clat percentiles (msec): 00:35:55.582 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 58], 00:35:55.582 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:35:55.582 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 124], 
00:35:55.582 | 99.00th=[ 144], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 180], 00:35:55.582 | 99.99th=[ 180] 00:35:55.582 bw ( KiB/s): min= 552, max= 1120, per=4.24%, avg=834.11, stdev=153.76, samples=19 00:35:55.582 iops : min= 138, max= 280, avg=208.53, stdev=38.44, samples=19 00:35:55.582 lat (msec) : 50=14.77%, 100=68.70%, 250=16.53% 00:35:55.582 cpu : usr=32.72%, sys=0.48%, ctx=884, majf=0, minf=9 00:35:55.582 IO depths : 1=1.0%, 2=2.4%, 4=9.7%, 8=74.0%, 16=12.9%, 32=0.0%, >=64=0.0% 00:35:55.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.582 complete : 0=0.0%, 4=89.9%, 8=5.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.582 issued rwts: total=2099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.582 filename0: (groupid=0, jobs=1): err= 0: pid=109641: Fri Apr 26 21:38:42 2024 00:35:55.582 read: IOPS=214, BW=860KiB/s (881kB/s)(8628KiB/10034msec) 00:35:55.582 slat (nsec): min=4296, max=53871, avg=11127.20, stdev=4719.40 00:35:55.582 clat (msec): min=27, max=167, avg=74.30, stdev=24.67 00:35:55.582 lat (msec): min=27, max=167, avg=74.31, stdev=24.67 00:35:55.582 clat percentiles (msec): 00:35:55.582 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 50], 00:35:55.582 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 74], 00:35:55.582 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:35:55.582 | 99.00th=[ 148], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:35:55.582 | 99.99th=[ 169] 00:35:55.582 bw ( KiB/s): min= 560, max= 1074, per=4.36%, avg=856.75, stdev=170.20, samples=20 00:35:55.582 iops : min= 140, max= 268, avg=214.10, stdev=42.50, samples=20 00:35:55.582 lat (msec) : 50=20.45%, 100=63.56%, 250=15.99% 00:35:55.582 cpu : usr=32.62%, sys=0.53%, ctx=860, majf=0, minf=9 00:35:55.582 IO depths : 1=0.6%, 2=1.3%, 4=7.6%, 8=77.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:35:55.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.582 complete : 0=0.0%, 4=89.6%, 8=6.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.582 issued rwts: total=2157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.582 filename0: (groupid=0, jobs=1): err= 0: pid=109642: Fri Apr 26 21:38:42 2024 00:35:55.582 read: IOPS=190, BW=761KiB/s (780kB/s)(7624KiB/10013msec) 00:35:55.582 slat (usec): min=4, max=4024, avg=15.91, stdev=130.01 00:35:55.582 clat (msec): min=30, max=159, avg=83.94, stdev=26.71 00:35:55.582 lat (msec): min=30, max=159, avg=83.96, stdev=26.71 00:35:55.582 clat percentiles (msec): 00:35:55.582 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 63], 00:35:55.582 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 90], 00:35:55.582 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 123], 95.00th=[ 132], 00:35:55.582 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 161], 00:35:55.582 | 99.99th=[ 161] 00:35:55.582 bw ( KiB/s): min= 512, max= 1024, per=3.84%, avg=755.26, stdev=175.92, samples=19 00:35:55.582 iops : min= 128, max= 256, avg=188.79, stdev=43.99, samples=19 00:35:55.582 lat (msec) : 50=10.39%, 100=62.12%, 250=27.49% 00:35:55.582 cpu : usr=39.52%, sys=0.53%, ctx=1110, majf=0, minf=9 00:35:55.582 IO depths : 1=1.6%, 2=4.1%, 4=13.4%, 8=69.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:55.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.582 complete : 0=0.0%, 4=91.2%, 8=3.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.582 issued 
rwts: total=1906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.582 filename0: (groupid=0, jobs=1): err= 0: pid=109643: Fri Apr 26 21:38:42 2024 00:35:55.582 read: IOPS=187, BW=751KiB/s (769kB/s)(7536KiB/10034msec) 00:35:55.582 slat (nsec): min=3549, max=54221, avg=11179.79, stdev=4395.14 00:35:55.582 clat (msec): min=33, max=161, avg=85.11, stdev=25.41 00:35:55.582 lat (msec): min=33, max=161, avg=85.12, stdev=25.41 00:35:55.582 clat percentiles (msec): 00:35:55.582 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 65], 00:35:55.582 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 89], 00:35:55.582 | 70.00th=[ 100], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 132], 00:35:55.582 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 161], 00:35:55.582 | 99.99th=[ 161] 00:35:55.582 bw ( KiB/s): min= 512, max= 896, per=3.74%, avg=736.84, stdev=144.05, samples=19 00:35:55.582 iops : min= 128, max= 224, avg=184.21, stdev=36.01, samples=19 00:35:55.582 lat (msec) : 50=5.84%, 100=65.61%, 250=28.56% 00:35:55.582 cpu : usr=44.57%, sys=0.90%, ctx=1507, majf=0, minf=9 00:35:55.582 IO depths : 1=3.8%, 2=8.2%, 4=19.2%, 8=59.7%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:55.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.582 complete : 0=0.0%, 4=92.7%, 8=1.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.582 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.582 filename0: (groupid=0, jobs=1): err= 0: pid=109644: Fri Apr 26 21:38:42 2024 00:35:55.582 read: IOPS=180, BW=722KiB/s (739kB/s)(7220KiB/10002msec) 00:35:55.582 slat (usec): min=4, max=3231, avg=14.19, stdev=96.03 00:35:55.582 clat (msec): min=6, max=206, avg=88.52, stdev=28.08 00:35:55.582 lat (msec): min=6, max=206, avg=88.54, stdev=28.09 00:35:55.582 clat percentiles (msec): 00:35:55.582 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 57], 20.00th=[ 67], 00:35:55.582 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 86], 60.00th=[ 95], 00:35:55.582 | 70.00th=[ 103], 80.00th=[ 109], 90.00th=[ 123], 95.00th=[ 140], 00:35:55.582 | 99.00th=[ 167], 99.50th=[ 171], 99.90th=[ 207], 99.95th=[ 207], 00:35:55.582 | 99.99th=[ 207] 00:35:55.582 bw ( KiB/s): min= 384, max= 944, per=3.62%, avg=712.84, stdev=151.48, samples=19 00:35:55.582 iops : min= 96, max= 236, avg=178.21, stdev=37.87, samples=19 00:35:55.583 lat (msec) : 10=0.28%, 50=6.76%, 100=60.66%, 250=32.30% 00:35:55.583 cpu : usr=39.45%, sys=0.68%, ctx=1239, majf=0, minf=9 00:35:55.583 IO depths : 1=2.9%, 2=6.5%, 4=17.1%, 8=63.8%, 16=9.8%, 32=0.0%, >=64=0.0% 00:35:55.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 complete : 0=0.0%, 4=91.9%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 issued rwts: total=1805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.583 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.583 filename0: (groupid=0, jobs=1): err= 0: pid=109645: Fri Apr 26 21:38:42 2024 00:35:55.583 read: IOPS=179, BW=719KiB/s (736kB/s)(7192KiB/10003msec) 00:35:55.583 slat (usec): min=3, max=8020, avg=22.32, stdev=249.92 00:35:55.583 clat (msec): min=26, max=190, avg=88.76, stdev=26.06 00:35:55.583 lat (msec): min=26, max=190, avg=88.79, stdev=26.05 00:35:55.583 clat percentiles (msec): 00:35:55.583 | 1.00th=[ 42], 5.00th=[ 55], 10.00th=[ 63], 20.00th=[ 67], 00:35:55.583 | 30.00th=[ 71], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 93], 
00:35:55.583 | 70.00th=[ 104], 80.00th=[ 111], 90.00th=[ 123], 95.00th=[ 133], 00:35:55.583 | 99.00th=[ 167], 99.50th=[ 167], 99.90th=[ 190], 99.95th=[ 190], 00:35:55.583 | 99.99th=[ 190] 00:35:55.583 bw ( KiB/s): min= 512, max= 896, per=3.62%, avg=711.84, stdev=132.41, samples=19 00:35:55.583 iops : min= 128, max= 224, avg=177.95, stdev=33.11, samples=19 00:35:55.583 lat (msec) : 50=3.73%, 100=65.41%, 250=30.87% 00:35:55.583 cpu : usr=39.19%, sys=0.61%, ctx=1066, majf=0, minf=9 00:35:55.583 IO depths : 1=3.4%, 2=7.5%, 4=19.1%, 8=60.8%, 16=9.2%, 32=0.0%, >=64=0.0% 00:35:55.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 complete : 0=0.0%, 4=92.3%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 issued rwts: total=1798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.583 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.583 filename1: (groupid=0, jobs=1): err= 0: pid=109646: Fri Apr 26 21:38:42 2024 00:35:55.583 read: IOPS=213, BW=855KiB/s (876kB/s)(8584KiB/10037msec) 00:35:55.583 slat (nsec): min=5106, max=35898, avg=11099.36, stdev=4285.87 00:35:55.583 clat (msec): min=27, max=167, avg=74.66, stdev=24.64 00:35:55.583 lat (msec): min=27, max=167, avg=74.67, stdev=24.64 00:35:55.583 clat percentiles (msec): 00:35:55.583 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 51], 00:35:55.583 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:35:55.583 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 121], 00:35:55.583 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 169], 99.95th=[ 169], 00:35:55.583 | 99.99th=[ 169] 00:35:55.583 bw ( KiB/s): min= 576, max= 1120, per=4.34%, avg=854.40, stdev=183.80, samples=20 00:35:55.583 iops : min= 144, max= 280, avg=213.50, stdev=45.96, samples=20 00:35:55.583 lat (msec) : 50=19.11%, 100=65.66%, 250=15.24% 00:35:55.583 cpu : usr=32.62%, sys=0.55%, ctx=914, majf=0, minf=9 00:35:55.583 IO depths : 1=1.3%, 2=2.9%, 4=9.7%, 8=73.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:35:55.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 complete : 0=0.0%, 4=90.0%, 8=5.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.583 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.583 filename1: (groupid=0, jobs=1): err= 0: pid=109647: Fri Apr 26 21:38:42 2024 00:35:55.583 read: IOPS=210, BW=841KiB/s (861kB/s)(8444KiB/10038msec) 00:35:55.583 slat (usec): min=6, max=8031, avg=25.96, stdev=348.61 00:35:55.583 clat (msec): min=11, max=168, avg=75.88, stdev=26.73 00:35:55.583 lat (msec): min=12, max=168, avg=75.91, stdev=26.74 00:35:55.583 clat percentiles (msec): 00:35:55.583 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 52], 00:35:55.583 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 82], 00:35:55.583 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 130], 00:35:55.583 | 99.00th=[ 146], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:35:55.583 | 99.99th=[ 169] 00:35:55.583 bw ( KiB/s): min= 512, max= 1232, per=4.26%, avg=837.90, stdev=187.98, samples=20 00:35:55.583 iops : min= 128, max= 308, avg=209.40, stdev=46.95, samples=20 00:35:55.583 lat (msec) : 20=0.33%, 50=17.95%, 100=65.56%, 250=16.15% 00:35:55.583 cpu : usr=32.64%, sys=0.50%, ctx=866, majf=0, minf=0 00:35:55.583 IO depths : 1=1.1%, 2=2.3%, 4=9.6%, 8=74.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:35:55.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 
complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 issued rwts: total=2111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.583 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.583 filename1: (groupid=0, jobs=1): err= 0: pid=109648: Fri Apr 26 21:38:42 2024 00:35:55.583 read: IOPS=177, BW=710KiB/s (727kB/s)(7104KiB/10009msec) 00:35:55.583 slat (nsec): min=3479, max=89080, avg=11484.63, stdev=5013.45 00:35:55.583 clat (msec): min=28, max=187, avg=90.06, stdev=24.48 00:35:55.583 lat (msec): min=28, max=187, avg=90.07, stdev=24.48 00:35:55.583 clat percentiles (msec): 00:35:55.583 | 1.00th=[ 40], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 70], 00:35:55.583 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 87], 60.00th=[ 95], 00:35:55.583 | 70.00th=[ 104], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 129], 00:35:55.583 | 99.00th=[ 161], 99.50th=[ 161], 99.90th=[ 188], 99.95th=[ 188], 00:35:55.583 | 99.99th=[ 188] 00:35:55.583 bw ( KiB/s): min= 512, max= 896, per=3.56%, avg=700.63, stdev=137.49, samples=19 00:35:55.583 iops : min= 128, max= 224, avg=175.16, stdev=34.37, samples=19 00:35:55.583 lat (msec) : 50=3.32%, 100=63.12%, 250=33.56% 00:35:55.583 cpu : usr=42.31%, sys=0.70%, ctx=1210, majf=0, minf=9 00:35:55.583 IO depths : 1=4.1%, 2=9.0%, 4=21.2%, 8=57.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:35:55.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.583 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.583 filename1: (groupid=0, jobs=1): err= 0: pid=109649: Fri Apr 26 21:38:42 2024 00:35:55.583 read: IOPS=201, BW=806KiB/s (825kB/s)(8084KiB/10032msec) 00:35:55.583 slat (usec): min=4, max=8021, avg=19.65, stdev=228.14 00:35:55.583 clat (msec): min=28, max=181, avg=79.12, stdev=28.44 00:35:55.583 lat (msec): min=28, max=181, avg=79.13, stdev=28.44 00:35:55.583 clat percentiles (msec): 00:35:55.583 | 1.00th=[ 33], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 55], 00:35:55.583 | 30.00th=[ 64], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 81], 00:35:55.583 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 120], 95.00th=[ 132], 00:35:55.583 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 182], 99.95th=[ 182], 00:35:55.583 | 99.99th=[ 182] 00:35:55.583 bw ( KiB/s): min= 512, max= 1232, per=4.12%, avg=810.53, stdev=211.96, samples=19 00:35:55.583 iops : min= 128, max= 308, avg=202.63, stdev=52.99, samples=19 00:35:55.583 lat (msec) : 50=14.65%, 100=60.22%, 250=25.14% 00:35:55.583 cpu : usr=46.40%, sys=0.79%, ctx=1597, majf=0, minf=9 00:35:55.583 IO depths : 1=1.8%, 2=3.9%, 4=11.9%, 8=70.9%, 16=11.6%, 32=0.0%, >=64=0.0% 00:35:55.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 issued rwts: total=2021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.583 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.583 filename1: (groupid=0, jobs=1): err= 0: pid=109650: Fri Apr 26 21:38:42 2024 00:35:55.583 read: IOPS=195, BW=783KiB/s (802kB/s)(7836KiB/10006msec) 00:35:55.583 slat (usec): min=6, max=8024, avg=21.35, stdev=271.29 00:35:55.583 clat (msec): min=8, max=170, avg=81.56, stdev=27.26 00:35:55.583 lat (msec): min=8, max=170, avg=81.58, stdev=27.27 00:35:55.583 clat percentiles (msec): 00:35:55.583 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 
61], 00:35:55.583 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:35:55.583 | 70.00th=[ 95], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 136], 00:35:55.583 | 99.00th=[ 153], 99.50th=[ 167], 99.90th=[ 171], 99.95th=[ 171], 00:35:55.583 | 99.99th=[ 171] 00:35:55.583 bw ( KiB/s): min= 512, max= 1024, per=3.93%, avg=772.32, stdev=143.39, samples=19 00:35:55.583 iops : min= 128, max= 256, avg=193.05, stdev=35.83, samples=19 00:35:55.583 lat (msec) : 10=0.31%, 50=11.84%, 100=65.54%, 250=22.31% 00:35:55.583 cpu : usr=32.78%, sys=0.41%, ctx=874, majf=0, minf=9 00:35:55.583 IO depths : 1=1.6%, 2=3.5%, 4=11.0%, 8=72.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:55.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 complete : 0=0.0%, 4=90.5%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 issued rwts: total=1959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.583 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.583 filename1: (groupid=0, jobs=1): err= 0: pid=109651: Fri Apr 26 21:38:42 2024 00:35:55.583 read: IOPS=181, BW=726KiB/s (743kB/s)(7272KiB/10016msec) 00:35:55.583 slat (usec): min=5, max=4022, avg=13.03, stdev=94.20 00:35:55.583 clat (msec): min=36, max=188, avg=88.02, stdev=28.12 00:35:55.583 lat (msec): min=36, max=188, avg=88.04, stdev=28.12 00:35:55.583 clat percentiles (msec): 00:35:55.583 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 65], 00:35:55.583 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 93], 00:35:55.583 | 70.00th=[ 103], 80.00th=[ 111], 90.00th=[ 123], 95.00th=[ 140], 00:35:55.583 | 99.00th=[ 163], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 188], 00:35:55.583 | 99.99th=[ 188] 00:35:55.583 bw ( KiB/s): min= 512, max= 1072, per=3.69%, avg=725.05, stdev=164.32, samples=19 00:35:55.583 iops : min= 128, max= 268, avg=181.26, stdev=41.08, samples=19 00:35:55.583 lat (msec) : 50=6.88%, 100=61.06%, 250=32.07% 00:35:55.583 cpu : usr=41.43%, sys=0.85%, ctx=1343, majf=0, minf=9 00:35:55.583 IO depths : 1=3.8%, 2=7.9%, 4=18.9%, 8=60.5%, 16=8.9%, 32=0.0%, >=64=0.0% 00:35:55.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 complete : 0=0.0%, 4=92.4%, 8=2.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.583 issued rwts: total=1818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.583 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.583 filename1: (groupid=0, jobs=1): err= 0: pid=109652: Fri Apr 26 21:38:42 2024 00:35:55.583 read: IOPS=249, BW=997KiB/s (1021kB/s)(9996KiB/10028msec) 00:35:55.583 slat (usec): min=3, max=8071, avg=23.47, stdev=295.61 00:35:55.583 clat (msec): min=2, max=142, avg=64.02, stdev=23.42 00:35:55.583 lat (msec): min=2, max=142, avg=64.04, stdev=23.42 00:35:55.583 clat percentiles (msec): 00:35:55.583 | 1.00th=[ 4], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 47], 00:35:55.584 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 64], 60.00th=[ 71], 00:35:55.584 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 108], 00:35:55.584 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:35:55.584 | 99.99th=[ 144] 00:35:55.584 bw ( KiB/s): min= 688, max= 1768, per=5.05%, avg=993.20, stdev=241.40, samples=20 00:35:55.584 iops : min= 172, max= 442, avg=248.30, stdev=60.35, samples=20 00:35:55.584 lat (msec) : 4=1.28%, 10=2.20%, 50=27.33%, 100=62.06%, 250=7.12% 00:35:55.584 cpu : usr=38.55%, sys=0.55%, ctx=1177, majf=0, minf=9 00:35:55.584 IO depths : 1=0.2%, 2=0.3%, 4=4.8%, 8=80.6%, 16=14.0%, 32=0.0%, >=64=0.0% 00:35:55.584 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 complete : 0=0.0%, 4=88.8%, 8=7.3%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 issued rwts: total=2499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.584 filename1: (groupid=0, jobs=1): err= 0: pid=109653: Fri Apr 26 21:38:42 2024 00:35:55.584 read: IOPS=223, BW=894KiB/s (916kB/s)(8968KiB/10030msec) 00:35:55.584 slat (usec): min=6, max=8017, avg=14.95, stdev=169.18 00:35:55.584 clat (msec): min=7, max=167, avg=71.39, stdev=25.17 00:35:55.584 lat (msec): min=7, max=167, avg=71.41, stdev=25.18 00:35:55.584 clat percentiles (msec): 00:35:55.584 | 1.00th=[ 14], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:35:55.584 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 72], 00:35:55.584 | 70.00th=[ 84], 80.00th=[ 90], 90.00th=[ 108], 95.00th=[ 121], 00:35:55.584 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 167], 00:35:55.584 | 99.99th=[ 167] 00:35:55.584 bw ( KiB/s): min= 640, max= 1198, per=4.54%, avg=893.55, stdev=161.13, samples=20 00:35:55.584 iops : min= 160, max= 299, avg=223.35, stdev=40.24, samples=20 00:35:55.584 lat (msec) : 10=0.71%, 20=0.71%, 50=23.82%, 100=61.37%, 250=13.38% 00:35:55.584 cpu : usr=32.66%, sys=0.52%, ctx=904, majf=0, minf=9 00:35:55.584 IO depths : 1=1.2%, 2=2.5%, 4=8.9%, 8=75.3%, 16=12.0%, 32=0.0%, >=64=0.0% 00:35:55.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 issued rwts: total=2242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.584 filename2: (groupid=0, jobs=1): err= 0: pid=109654: Fri Apr 26 21:38:42 2024 00:35:55.584 read: IOPS=199, BW=798KiB/s (817kB/s)(8012KiB/10036msec) 00:35:55.584 slat (usec): min=6, max=8023, avg=21.00, stdev=268.21 00:35:55.584 clat (msec): min=25, max=182, avg=80.05, stdev=29.06 00:35:55.584 lat (msec): min=25, max=182, avg=80.07, stdev=29.05 00:35:55.584 clat percentiles (msec): 00:35:55.584 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 57], 00:35:55.584 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 84], 00:35:55.584 | 70.00th=[ 91], 80.00th=[ 99], 90.00th=[ 121], 95.00th=[ 144], 00:35:55.584 | 99.00th=[ 159], 99.50th=[ 180], 99.90th=[ 182], 99.95th=[ 182], 00:35:55.584 | 99.99th=[ 182] 00:35:55.584 bw ( KiB/s): min= 424, max= 1122, per=4.04%, avg=795.15, stdev=205.38, samples=20 00:35:55.584 iops : min= 106, max= 280, avg=198.70, stdev=51.28, samples=20 00:35:55.584 lat (msec) : 50=16.87%, 100=64.25%, 250=18.87% 00:35:55.584 cpu : usr=32.76%, sys=0.49%, ctx=876, majf=0, minf=9 00:35:55.584 IO depths : 1=1.1%, 2=2.6%, 4=10.3%, 8=73.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:35:55.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 complete : 0=0.0%, 4=90.2%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 issued rwts: total=2003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.584 filename2: (groupid=0, jobs=1): err= 0: pid=109655: Fri Apr 26 21:38:42 2024 00:35:55.584 read: IOPS=223, BW=896KiB/s (917kB/s)(8984KiB/10031msec) 00:35:55.584 slat (usec): min=4, max=8024, avg=21.81, stdev=292.74 00:35:55.584 clat (msec): min=22, max=144, avg=71.31, stdev=24.02 00:35:55.584 lat (msec): min=22, max=144, avg=71.33, stdev=24.02 00:35:55.584 
clat percentiles (msec): 00:35:55.584 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:35:55.584 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 72], 00:35:55.584 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 120], 00:35:55.584 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:35:55.584 | 99.99th=[ 146] 00:35:55.584 bw ( KiB/s): min= 680, max= 1248, per=4.51%, avg=886.32, stdev=179.37, samples=19 00:35:55.584 iops : min= 170, max= 312, avg=221.58, stdev=44.84, samples=19 00:35:55.584 lat (msec) : 50=26.27%, 100=62.56%, 250=11.18% 00:35:55.584 cpu : usr=32.56%, sys=0.60%, ctx=857, majf=0, minf=9 00:35:55.584 IO depths : 1=0.4%, 2=0.7%, 4=6.3%, 8=79.2%, 16=13.5%, 32=0.0%, >=64=0.0% 00:35:55.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 complete : 0=0.0%, 4=89.1%, 8=6.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 issued rwts: total=2246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.584 filename2: (groupid=0, jobs=1): err= 0: pid=109656: Fri Apr 26 21:38:42 2024 00:35:55.584 read: IOPS=185, BW=742KiB/s (760kB/s)(7432KiB/10015msec) 00:35:55.584 slat (usec): min=3, max=8024, avg=21.38, stdev=242.56 00:35:55.584 clat (msec): min=36, max=163, avg=86.04, stdev=26.91 00:35:55.584 lat (msec): min=36, max=163, avg=86.06, stdev=26.90 00:35:55.584 clat percentiles (msec): 00:35:55.584 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 66], 00:35:55.584 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 90], 00:35:55.584 | 70.00th=[ 100], 80.00th=[ 109], 90.00th=[ 123], 95.00th=[ 140], 00:35:55.584 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 165], 99.95th=[ 165], 00:35:55.584 | 99.99th=[ 165] 00:35:55.584 bw ( KiB/s): min= 512, max= 1152, per=3.73%, avg=733.05, stdev=177.95, samples=19 00:35:55.584 iops : min= 128, max= 288, avg=183.26, stdev=44.49, samples=19 00:35:55.584 lat (msec) : 50=7.37%, 100=63.40%, 250=29.22% 00:35:55.584 cpu : usr=39.50%, sys=0.58%, ctx=1252, majf=0, minf=9 00:35:55.584 IO depths : 1=3.3%, 2=7.4%, 4=18.2%, 8=61.6%, 16=9.4%, 32=0.0%, >=64=0.0% 00:35:55.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 complete : 0=0.0%, 4=92.2%, 8=2.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 issued rwts: total=1858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.584 filename2: (groupid=0, jobs=1): err= 0: pid=109657: Fri Apr 26 21:38:42 2024 00:35:55.584 read: IOPS=190, BW=763KiB/s (781kB/s)(7656KiB/10036msec) 00:35:55.584 slat (usec): min=3, max=8023, avg=17.56, stdev=204.81 00:35:55.584 clat (msec): min=34, max=172, avg=83.76, stdev=25.50 00:35:55.584 lat (msec): min=34, max=172, avg=83.78, stdev=25.49 00:35:55.584 clat percentiles (msec): 00:35:55.584 | 1.00th=[ 43], 5.00th=[ 54], 10.00th=[ 58], 20.00th=[ 65], 00:35:55.584 | 30.00th=[ 69], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 84], 00:35:55.584 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 122], 95.00th=[ 136], 00:35:55.584 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 174], 99.95th=[ 174], 00:35:55.584 | 99.99th=[ 174] 00:35:55.584 bw ( KiB/s): min= 512, max= 1024, per=3.86%, avg=759.30, stdev=163.13, samples=20 00:35:55.584 iops : min= 128, max= 256, avg=189.75, stdev=40.75, samples=20 00:35:55.584 lat (msec) : 50=2.61%, 100=75.81%, 250=21.58% 00:35:55.584 cpu : usr=42.31%, sys=0.58%, ctx=1082, majf=0, minf=9 00:35:55.584 IO depths : 1=2.7%, 
2=6.2%, 4=16.2%, 8=63.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:35:55.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 complete : 0=0.0%, 4=92.0%, 8=3.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 issued rwts: total=1914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.584 filename2: (groupid=0, jobs=1): err= 0: pid=109658: Fri Apr 26 21:38:42 2024 00:35:55.584 read: IOPS=213, BW=854KiB/s (875kB/s)(8548KiB/10005msec) 00:35:55.584 slat (usec): min=3, max=4040, avg=15.10, stdev=123.13 00:35:55.584 clat (msec): min=15, max=147, avg=74.81, stdev=22.56 00:35:55.584 lat (msec): min=15, max=147, avg=74.82, stdev=22.56 00:35:55.584 clat percentiles (msec): 00:35:55.584 | 1.00th=[ 25], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 57], 00:35:55.584 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 74], 60.00th=[ 80], 00:35:55.584 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 115], 00:35:55.584 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:35:55.584 | 99.99th=[ 148] 00:35:55.584 bw ( KiB/s): min= 568, max= 1282, per=4.30%, avg=845.63, stdev=174.30, samples=19 00:35:55.584 iops : min= 142, max= 320, avg=211.37, stdev=43.51, samples=19 00:35:55.584 lat (msec) : 20=0.75%, 50=14.09%, 100=71.74%, 250=13.43% 00:35:55.584 cpu : usr=44.34%, sys=0.71%, ctx=1289, majf=0, minf=9 00:35:55.584 IO depths : 1=1.4%, 2=3.1%, 4=10.2%, 8=72.7%, 16=12.5%, 32=0.0%, >=64=0.0% 00:35:55.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 complete : 0=0.0%, 4=90.5%, 8=5.3%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 issued rwts: total=2137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.584 filename2: (groupid=0, jobs=1): err= 0: pid=109659: Fri Apr 26 21:38:42 2024 00:35:55.584 read: IOPS=212, BW=850KiB/s (870kB/s)(8528KiB/10038msec) 00:35:55.584 slat (usec): min=6, max=4059, avg=14.58, stdev=123.69 00:35:55.584 clat (msec): min=13, max=145, avg=75.12, stdev=25.49 00:35:55.584 lat (msec): min=13, max=145, avg=75.14, stdev=25.49 00:35:55.584 clat percentiles (msec): 00:35:55.584 | 1.00th=[ 27], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 52], 00:35:55.584 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 81], 00:35:55.584 | 70.00th=[ 88], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 122], 00:35:55.584 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:35:55.584 | 99.99th=[ 146] 00:35:55.584 bw ( KiB/s): min= 512, max= 1122, per=4.31%, avg=848.75, stdev=199.44, samples=20 00:35:55.584 iops : min= 128, max= 280, avg=212.10, stdev=49.84, samples=20 00:35:55.584 lat (msec) : 20=0.75%, 50=16.51%, 100=64.73%, 250=18.01% 00:35:55.584 cpu : usr=47.35%, sys=0.75%, ctx=1318, majf=0, minf=9 00:35:55.584 IO depths : 1=1.9%, 2=4.0%, 4=13.4%, 8=69.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:35:55.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 complete : 0=0.0%, 4=90.5%, 8=4.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.584 issued rwts: total=2132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.584 filename2: (groupid=0, jobs=1): err= 0: pid=109660: Fri Apr 26 21:38:42 2024 00:35:55.584 read: IOPS=248, BW=994KiB/s (1018kB/s)(9968KiB/10024msec) 00:35:55.584 slat (usec): min=3, max=8025, avg=18.40, stdev=240.83 00:35:55.585 clat (usec): min=1481, max=131811, avg=64177.17, 
stdev=24493.49 00:35:55.585 lat (usec): min=1488, max=131828, avg=64195.57, stdev=24496.01 00:35:55.585 clat percentiles (usec): 00:35:55.585 | 1.00th=[ 1582], 5.00th=[ 6325], 10.00th=[ 39584], 20.00th=[ 46924], 00:35:55.585 | 30.00th=[ 51643], 40.00th=[ 58983], 50.00th=[ 63701], 60.00th=[ 70779], 00:35:55.585 | 70.00th=[ 74974], 80.00th=[ 82314], 90.00th=[ 94897], 95.00th=[105382], 00:35:55.585 | 99.00th=[121111], 99.50th=[128451], 99.90th=[131597], 99.95th=[131597], 00:35:55.585 | 99.99th=[131597] 00:35:55.585 bw ( KiB/s): min= 688, max= 2176, per=5.04%, avg=990.40, stdev=320.54, samples=20 00:35:55.585 iops : min= 172, max= 544, avg=247.60, stdev=80.14, samples=20 00:35:55.585 lat (msec) : 2=1.93%, 4=1.93%, 10=1.28%, 20=0.64%, 50=21.47% 00:35:55.585 lat (msec) : 100=65.25%, 250=7.50% 00:35:55.585 cpu : usr=46.36%, sys=0.72%, ctx=1084, majf=0, minf=0 00:35:55.585 IO depths : 1=1.1%, 2=2.2%, 4=8.8%, 8=75.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:35:55.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.585 complete : 0=0.0%, 4=89.7%, 8=5.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.585 issued rwts: total=2492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.585 filename2: (groupid=0, jobs=1): err= 0: pid=109661: Fri Apr 26 21:38:42 2024 00:35:55.585 read: IOPS=220, BW=883KiB/s (905kB/s)(8864KiB/10035msec) 00:35:55.585 slat (nsec): min=6745, max=41148, avg=10723.12, stdev=4413.31 00:35:55.585 clat (msec): min=28, max=158, avg=72.30, stdev=24.14 00:35:55.585 lat (msec): min=28, max=158, avg=72.31, stdev=24.14 00:35:55.585 clat percentiles (msec): 00:35:55.585 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 48], 00:35:55.585 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 74], 00:35:55.585 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 107], 95.00th=[ 120], 00:35:55.585 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 159], 99.95th=[ 159], 00:35:55.585 | 99.99th=[ 159] 00:35:55.585 bw ( KiB/s): min= 513, max= 1152, per=4.48%, avg=880.30, stdev=183.36, samples=20 00:35:55.585 iops : min= 128, max= 288, avg=220.00, stdev=45.82, samples=20 00:35:55.585 lat (msec) : 50=20.49%, 100=64.94%, 250=14.58% 00:35:55.585 cpu : usr=37.16%, sys=0.57%, ctx=1164, majf=0, minf=9 00:35:55.585 IO depths : 1=0.8%, 2=1.7%, 4=8.2%, 8=76.5%, 16=12.8%, 32=0.0%, >=64=0.0% 00:35:55.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.585 complete : 0=0.0%, 4=89.4%, 8=6.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.585 issued rwts: total=2216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:55.585 00:35:55.585 Run status group 0 (all jobs): 00:35:55.585 READ: bw=19.2MiB/s (20.1MB/s), 709KiB/s-997KiB/s (726kB/s-1021kB/s), io=193MiB (202MB), run=10002-10038msec 00:35:55.585 21:38:42 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:55.585 21:38:42 -- target/dif.sh@43 -- # local sub 00:35:55.585 21:38:42 -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.585 21:38:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:55.585 21:38:42 -- target/dif.sh@36 -- # local sub_id=0 00:35:55.585 21:38:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:55.585 21:38:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:42 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 21:38:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:42 -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:55.585 21:38:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:42 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 21:38:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:42 -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.585 21:38:42 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:55.585 21:38:42 -- target/dif.sh@36 -- # local sub_id=1 00:35:55.585 21:38:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:55.585 21:38:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:42 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 21:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:55.585 21:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:43 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 21:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:43 -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.585 21:38:43 -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:55.585 21:38:43 -- target/dif.sh@36 -- # local sub_id=2 00:35:55.585 21:38:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:55.585 21:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:43 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 21:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:55.585 21:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:43 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 21:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:43 -- target/dif.sh@115 -- # NULL_DIF=1 00:35:55.585 21:38:43 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:55.585 21:38:43 -- target/dif.sh@115 -- # numjobs=2 00:35:55.585 21:38:43 -- target/dif.sh@115 -- # iodepth=8 00:35:55.585 21:38:43 -- target/dif.sh@115 -- # runtime=5 00:35:55.585 21:38:43 -- target/dif.sh@115 -- # files=1 00:35:55.585 21:38:43 -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:55.585 21:38:43 -- target/dif.sh@28 -- # local sub 00:35:55.585 21:38:43 -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.585 21:38:43 -- target/dif.sh@31 -- # create_subsystem 0 00:35:55.585 21:38:43 -- target/dif.sh@18 -- # local sub_id=0 00:35:55.585 21:38:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:55.585 21:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:43 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 bdev_null0 00:35:55.585 21:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:55.585 21:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:43 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 21:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:55.585 21:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:43 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 21:38:43 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:55.585 21:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:43 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 [2024-04-26 21:38:43.094802] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.585 21:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:43 -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.585 21:38:43 -- target/dif.sh@31 -- # create_subsystem 1 00:35:55.585 21:38:43 -- target/dif.sh@18 -- # local sub_id=1 00:35:55.585 21:38:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:55.585 21:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:43 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 bdev_null1 00:35:55.585 21:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:55.585 21:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:43 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 21:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:55.585 21:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:43 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 21:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:55.585 21:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:35:55.585 21:38:43 -- common/autotest_common.sh@10 -- # set +x 00:35:55.585 21:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:35:55.585 21:38:43 -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:55.585 21:38:43 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:55.585 21:38:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:55.585 21:38:43 -- nvmf/common.sh@521 -- # config=() 00:35:55.585 21:38:43 -- nvmf/common.sh@521 -- # local subsystem config 00:35:55.585 21:38:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.585 21:38:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:55.585 21:38:43 -- target/dif.sh@82 -- # gen_fio_conf 00:35:55.585 21:38:43 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.585 21:38:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:55.585 { 00:35:55.585 "params": { 00:35:55.585 "name": "Nvme$subsystem", 00:35:55.585 "trtype": "$TEST_TRANSPORT", 00:35:55.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.585 "adrfam": "ipv4", 00:35:55.585 "trsvcid": "$NVMF_PORT", 00:35:55.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.585 "hdgst": ${hdgst:-false}, 00:35:55.585 "ddgst": ${ddgst:-false} 00:35:55.585 }, 00:35:55.585 "method": "bdev_nvme_attach_controller" 00:35:55.585 } 00:35:55.585 EOF 00:35:55.585 )") 00:35:55.585 21:38:43 -- 
target/dif.sh@54 -- # local file 00:35:55.585 21:38:43 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:35:55.585 21:38:43 -- target/dif.sh@56 -- # cat 00:35:55.585 21:38:43 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:55.585 21:38:43 -- common/autotest_common.sh@1325 -- # local sanitizers 00:35:55.585 21:38:43 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:55.585 21:38:43 -- common/autotest_common.sh@1327 -- # shift 00:35:55.585 21:38:43 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:35:55.585 21:38:43 -- nvmf/common.sh@543 -- # cat 00:35:55.585 21:38:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.585 21:38:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:55.585 21:38:43 -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.586 21:38:43 -- target/dif.sh@73 -- # cat 00:35:55.586 21:38:43 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:55.586 21:38:43 -- common/autotest_common.sh@1331 -- # grep libasan 00:35:55.586 21:38:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:55.586 21:38:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:35:55.586 21:38:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:35:55.586 { 00:35:55.586 "params": { 00:35:55.586 "name": "Nvme$subsystem", 00:35:55.586 "trtype": "$TEST_TRANSPORT", 00:35:55.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.586 "adrfam": "ipv4", 00:35:55.586 "trsvcid": "$NVMF_PORT", 00:35:55.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.586 "hdgst": ${hdgst:-false}, 00:35:55.586 "ddgst": ${ddgst:-false} 00:35:55.586 }, 00:35:55.586 "method": "bdev_nvme_attach_controller" 00:35:55.586 } 00:35:55.586 EOF 00:35:55.586 )") 00:35:55.586 21:38:43 -- target/dif.sh@72 -- # (( file++ )) 00:35:55.586 21:38:43 -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.586 21:38:43 -- nvmf/common.sh@543 -- # cat 00:35:55.586 21:38:43 -- nvmf/common.sh@545 -- # jq . 
00:35:55.586 21:38:43 -- nvmf/common.sh@546 -- # IFS=, 00:35:55.586 21:38:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:35:55.586 "params": { 00:35:55.586 "name": "Nvme0", 00:35:55.586 "trtype": "tcp", 00:35:55.586 "traddr": "10.0.0.2", 00:35:55.586 "adrfam": "ipv4", 00:35:55.586 "trsvcid": "4420", 00:35:55.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:55.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:55.586 "hdgst": false, 00:35:55.586 "ddgst": false 00:35:55.586 }, 00:35:55.586 "method": "bdev_nvme_attach_controller" 00:35:55.586 },{ 00:35:55.586 "params": { 00:35:55.586 "name": "Nvme1", 00:35:55.586 "trtype": "tcp", 00:35:55.586 "traddr": "10.0.0.2", 00:35:55.586 "adrfam": "ipv4", 00:35:55.586 "trsvcid": "4420", 00:35:55.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:55.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:55.586 "hdgst": false, 00:35:55.586 "ddgst": false 00:35:55.586 }, 00:35:55.586 "method": "bdev_nvme_attach_controller" 00:35:55.586 }' 00:35:55.586 21:38:43 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:55.586 21:38:43 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:55.586 21:38:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.586 21:38:43 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:55.586 21:38:43 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:35:55.586 21:38:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:35:55.586 21:38:43 -- common/autotest_common.sh@1331 -- # asan_lib= 00:35:55.586 21:38:43 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:35:55.586 21:38:43 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:55.586 21:38:43 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.586 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:55.586 ... 00:35:55.586 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:55.586 ... 
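Before the 4-thread job below starts, the trace above has already created the two targets it reads from: a dif-type-1 null bdev per subsystem, an NVMe-oF subsystem per bdev, a namespace mapping, and a TCP listener on 10.0.0.2:4420. Condensed into direct scripts/rpc.py calls for reference, as a sketch only (the rpc.py path is assumed from the repo layout seen elsewhere in this log; the test itself goes through the rpc_cmd wrapper):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed path
for i in 0 1; do
  $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
  $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
      --serial-number "53313233-$i" --allow-any-host
  $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
  $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
      -t tcp -a 10.0.0.2 -s 4420
done

The arguments are the ones visible in the rpc_cmd trace entries above; the fio run that follows attaches to cnode0 and cnode1 over TCP and issues random reads against the null namespaces.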
00:35:55.586 fio-3.35 00:35:55.586 Starting 4 threads 00:35:59.779 00:35:59.779 filename0: (groupid=0, jobs=1): err= 0: pid=109793: Fri Apr 26 21:38:49 2024 00:35:59.779 read: IOPS=2255, BW=17.6MiB/s (18.5MB/s)(88.2MiB/5004msec) 00:35:59.779 slat (nsec): min=5552, max=43654, avg=8220.31, stdev=3416.31 00:35:59.779 clat (usec): min=1392, max=4481, avg=3505.07, stdev=251.97 00:35:59.779 lat (usec): min=1400, max=4495, avg=3513.29, stdev=252.16 00:35:59.779 clat percentiles (usec): 00:35:59.779 | 1.00th=[ 3032], 5.00th=[ 3130], 10.00th=[ 3163], 20.00th=[ 3261], 00:35:59.779 | 30.00th=[ 3359], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3589], 00:35:59.779 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 3785], 95.00th=[ 3851], 00:35:59.779 | 99.00th=[ 3982], 99.50th=[ 4080], 99.90th=[ 4293], 99.95th=[ 4359], 00:35:59.779 | 99.99th=[ 4424] 00:35:59.779 bw ( KiB/s): min=16896, max=19200, per=25.09%, avg=18090.67, stdev=889.12, samples=9 00:35:59.779 iops : min= 2112, max= 2400, avg=2261.33, stdev=111.14, samples=9 00:35:59.779 lat (msec) : 2=0.21%, 4=98.84%, 10=0.95% 00:35:59.779 cpu : usr=95.78%, sys=3.28%, ctx=5, majf=0, minf=0 00:35:59.779 IO depths : 1=9.2%, 2=25.0%, 4=50.0%, 8=15.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.779 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.779 issued rwts: total=11288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.779 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:59.779 filename0: (groupid=0, jobs=1): err= 0: pid=109794: Fri Apr 26 21:38:49 2024 00:35:59.779 read: IOPS=2252, BW=17.6MiB/s (18.5MB/s)(88.0MiB/5001msec) 00:35:59.779 slat (nsec): min=5982, max=46747, avg=14856.49, stdev=3675.82 00:35:59.779 clat (usec): min=1865, max=5843, avg=3484.11, stdev=287.14 00:35:59.779 lat (usec): min=1878, max=5870, avg=3498.96, stdev=287.63 00:35:59.779 clat percentiles (usec): 00:35:59.779 | 1.00th=[ 2737], 5.00th=[ 3064], 10.00th=[ 3130], 20.00th=[ 3228], 00:35:59.779 | 30.00th=[ 3326], 40.00th=[ 3425], 50.00th=[ 3523], 60.00th=[ 3556], 00:35:59.779 | 70.00th=[ 3621], 80.00th=[ 3720], 90.00th=[ 3785], 95.00th=[ 3851], 00:35:59.779 | 99.00th=[ 4293], 99.50th=[ 4490], 99.90th=[ 4948], 99.95th=[ 5604], 00:35:59.779 | 99.99th=[ 5669] 00:35:59.779 bw ( KiB/s): min=17024, max=19200, per=25.03%, avg=18044.00, stdev=888.84, samples=9 00:35:59.779 iops : min= 2128, max= 2400, avg=2255.44, stdev=111.13, samples=9 00:35:59.779 lat (msec) : 2=0.07%, 4=97.75%, 10=2.18% 00:35:59.779 cpu : usr=96.40%, sys=2.70%, ctx=10, majf=0, minf=0 00:35:59.779 IO depths : 1=9.3%, 2=25.0%, 4=50.0%, 8=15.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.779 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.779 issued rwts: total=11264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.779 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:59.779 filename1: (groupid=0, jobs=1): err= 0: pid=109795: Fri Apr 26 21:38:49 2024 00:35:59.779 read: IOPS=2252, BW=17.6MiB/s (18.5MB/s)(88.0MiB/5001msec) 00:35:59.779 slat (nsec): min=5655, max=55911, avg=14411.94, stdev=4012.90 00:35:59.779 clat (usec): min=1216, max=6446, avg=3484.51, stdev=380.24 00:35:59.779 lat (usec): min=1228, max=6454, avg=3498.92, stdev=380.59 00:35:59.779 clat percentiles (usec): 00:35:59.779 | 1.00th=[ 2409], 5.00th=[ 3064], 10.00th=[ 3130], 20.00th=[ 3228], 00:35:59.779 | 30.00th=[ 
3326], 40.00th=[ 3425], 50.00th=[ 3490], 60.00th=[ 3556], 00:35:59.779 | 70.00th=[ 3621], 80.00th=[ 3720], 90.00th=[ 3785], 95.00th=[ 3851], 00:35:59.779 | 99.00th=[ 5080], 99.50th=[ 5473], 99.90th=[ 5997], 99.95th=[ 6194], 00:35:59.779 | 99.99th=[ 6390] 00:35:59.779 bw ( KiB/s): min=17008, max=19200, per=25.03%, avg=18048.00, stdev=886.85, samples=9 00:35:59.779 iops : min= 2126, max= 2400, avg=2256.00, stdev=110.86, samples=9 00:35:59.779 lat (msec) : 2=0.48%, 4=96.73%, 10=2.79% 00:35:59.779 cpu : usr=96.42%, sys=2.68%, ctx=9, majf=0, minf=9 00:35:59.779 IO depths : 1=8.3%, 2=25.0%, 4=50.0%, 8=16.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.779 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.779 issued rwts: total=11264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.779 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:59.779 filename1: (groupid=0, jobs=1): err= 0: pid=109796: Fri Apr 26 21:38:49 2024 00:35:59.779 read: IOPS=2255, BW=17.6MiB/s (18.5MB/s)(88.1MiB/5003msec) 00:35:59.779 slat (nsec): min=5880, max=45710, avg=12436.69, stdev=4122.61 00:35:59.779 clat (usec): min=1623, max=5104, avg=3493.76, stdev=291.44 00:35:59.779 lat (usec): min=1629, max=5118, avg=3506.20, stdev=291.63 00:35:59.779 clat percentiles (usec): 00:35:59.779 | 1.00th=[ 2638], 5.00th=[ 3097], 10.00th=[ 3130], 20.00th=[ 3261], 00:35:59.779 | 30.00th=[ 3359], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3589], 00:35:59.779 | 70.00th=[ 3654], 80.00th=[ 3720], 90.00th=[ 3818], 95.00th=[ 3851], 00:35:59.779 | 99.00th=[ 4359], 99.50th=[ 4490], 99.90th=[ 4686], 99.95th=[ 4752], 00:35:59.779 | 99.99th=[ 5080] 00:35:59.779 bw ( KiB/s): min=16912, max=19200, per=25.08%, avg=18085.67, stdev=889.78, samples=9 00:35:59.779 iops : min= 2114, max= 2400, avg=2260.67, stdev=111.23, samples=9 00:35:59.779 lat (msec) : 2=0.12%, 4=97.71%, 10=2.16% 00:35:59.779 cpu : usr=96.12%, sys=2.94%, ctx=16, majf=0, minf=0 00:35:59.779 IO depths : 1=9.3%, 2=24.5%, 4=50.5%, 8=15.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:59.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.779 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:59.779 issued rwts: total=11283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:59.779 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:59.779 00:35:59.779 Run status group 0 (all jobs): 00:35:59.779 READ: bw=70.4MiB/s (73.8MB/s), 17.6MiB/s-17.6MiB/s (18.5MB/s-18.5MB/s), io=352MiB (369MB), run=5001-5004msec 00:36:00.040 21:38:49 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:00.040 21:38:49 -- target/dif.sh@43 -- # local sub 00:36:00.040 21:38:49 -- target/dif.sh@45 -- # for sub in "$@" 00:36:00.040 21:38:49 -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:00.040 21:38:49 -- target/dif.sh@36 -- # local sub_id=0 00:36:00.040 21:38:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:00.040 21:38:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:00.040 21:38:49 -- common/autotest_common.sh@10 -- # set +x 00:36:00.040 21:38:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:00.040 21:38:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:00.040 21:38:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:00.040 21:38:49 -- common/autotest_common.sh@10 -- # set +x 00:36:00.040 21:38:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:36:00.040 21:38:49 -- target/dif.sh@45 -- # for sub in "$@" 00:36:00.040 21:38:49 -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:00.040 21:38:49 -- target/dif.sh@36 -- # local sub_id=1 00:36:00.040 21:38:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:00.040 21:38:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:00.040 21:38:49 -- common/autotest_common.sh@10 -- # set +x 00:36:00.040 21:38:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:00.040 21:38:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:00.040 21:38:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:00.040 21:38:49 -- common/autotest_common.sh@10 -- # set +x 00:36:00.040 21:38:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:00.040 00:36:00.040 real 0m23.618s 00:36:00.040 user 2m8.357s 00:36:00.040 sys 0m3.533s 00:36:00.040 ************************************ 00:36:00.040 END TEST fio_dif_rand_params 00:36:00.040 ************************************ 00:36:00.040 21:38:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:00.040 21:38:49 -- common/autotest_common.sh@10 -- # set +x 00:36:00.040 21:38:49 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:00.040 21:38:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:00.040 21:38:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:00.040 21:38:49 -- common/autotest_common.sh@10 -- # set +x 00:36:00.309 ************************************ 00:36:00.309 START TEST fio_dif_digest 00:36:00.309 ************************************ 00:36:00.309 21:38:49 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:36:00.309 21:38:49 -- target/dif.sh@123 -- # local NULL_DIF 00:36:00.309 21:38:49 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:00.309 21:38:49 -- target/dif.sh@125 -- # local hdgst ddgst 00:36:00.309 21:38:49 -- target/dif.sh@127 -- # NULL_DIF=3 00:36:00.309 21:38:49 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:00.309 21:38:49 -- target/dif.sh@127 -- # numjobs=3 00:36:00.309 21:38:49 -- target/dif.sh@127 -- # iodepth=3 00:36:00.309 21:38:49 -- target/dif.sh@127 -- # runtime=10 00:36:00.309 21:38:49 -- target/dif.sh@128 -- # hdgst=true 00:36:00.309 21:38:49 -- target/dif.sh@128 -- # ddgst=true 00:36:00.309 21:38:49 -- target/dif.sh@130 -- # create_subsystems 0 00:36:00.309 21:38:49 -- target/dif.sh@28 -- # local sub 00:36:00.309 21:38:49 -- target/dif.sh@30 -- # for sub in "$@" 00:36:00.309 21:38:49 -- target/dif.sh@31 -- # create_subsystem 0 00:36:00.309 21:38:49 -- target/dif.sh@18 -- # local sub_id=0 00:36:00.309 21:38:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:00.309 21:38:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:00.309 21:38:49 -- common/autotest_common.sh@10 -- # set +x 00:36:00.309 bdev_null0 00:36:00.309 21:38:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:00.309 21:38:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:00.309 21:38:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:00.309 21:38:49 -- common/autotest_common.sh@10 -- # set +x 00:36:00.309 21:38:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:00.309 21:38:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:00.309 21:38:49 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:36:00.309 21:38:49 -- common/autotest_common.sh@10 -- # set +x 00:36:00.309 21:38:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:00.309 21:38:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:00.309 21:38:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:00.309 21:38:49 -- common/autotest_common.sh@10 -- # set +x 00:36:00.309 [2024-04-26 21:38:49.412144] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:00.309 21:38:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:00.309 21:38:49 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:00.309 21:38:49 -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:00.309 21:38:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:00.309 21:38:49 -- nvmf/common.sh@521 -- # config=() 00:36:00.309 21:38:49 -- nvmf/common.sh@521 -- # local subsystem config 00:36:00.309 21:38:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:36:00.309 21:38:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:36:00.309 { 00:36:00.309 "params": { 00:36:00.309 "name": "Nvme$subsystem", 00:36:00.309 "trtype": "$TEST_TRANSPORT", 00:36:00.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:00.309 "adrfam": "ipv4", 00:36:00.309 "trsvcid": "$NVMF_PORT", 00:36:00.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:00.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:00.309 "hdgst": ${hdgst:-false}, 00:36:00.309 "ddgst": ${ddgst:-false} 00:36:00.309 }, 00:36:00.309 "method": "bdev_nvme_attach_controller" 00:36:00.309 } 00:36:00.309 EOF 00:36:00.309 )") 00:36:00.309 21:38:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:00.309 21:38:49 -- target/dif.sh@82 -- # gen_fio_conf 00:36:00.309 21:38:49 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:00.309 21:38:49 -- target/dif.sh@54 -- # local file 00:36:00.309 21:38:49 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:36:00.309 21:38:49 -- target/dif.sh@56 -- # cat 00:36:00.309 21:38:49 -- nvmf/common.sh@543 -- # cat 00:36:00.309 21:38:49 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:00.309 21:38:49 -- common/autotest_common.sh@1325 -- # local sanitizers 00:36:00.309 21:38:49 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:00.309 21:38:49 -- common/autotest_common.sh@1327 -- # shift 00:36:00.309 21:38:49 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:36:00.309 21:38:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:36:00.309 21:38:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:36:00.309 21:38:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:00.309 21:38:49 -- target/dif.sh@72 -- # (( file <= files )) 00:36:00.309 21:38:49 -- common/autotest_common.sh@1331 -- # grep libasan 00:36:00.309 21:38:49 -- nvmf/common.sh@545 -- # jq . 
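Annotation (not part of the captured trace): the harness above generates a bdev_nvme JSON config and a fio job section and hands both to fio over /dev/fd/62 and /dev/fd/61; the rendered JSON is printed a few entries below. A minimal standalone sketch of the same invocation, with hypothetical on-disk files (bdev.json holding the printed config, job.fio holding the generated job) standing in for the file descriptors:
  # bdev.json: the JSON config printed below; job.fio: the generated fio job section
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio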
00:36:00.309 21:38:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:36:00.309 21:38:49 -- nvmf/common.sh@546 -- # IFS=, 00:36:00.309 21:38:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:36:00.309 "params": { 00:36:00.309 "name": "Nvme0", 00:36:00.309 "trtype": "tcp", 00:36:00.309 "traddr": "10.0.0.2", 00:36:00.309 "adrfam": "ipv4", 00:36:00.309 "trsvcid": "4420", 00:36:00.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:00.309 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:00.309 "hdgst": true, 00:36:00.309 "ddgst": true 00:36:00.309 }, 00:36:00.309 "method": "bdev_nvme_attach_controller" 00:36:00.309 }' 00:36:00.309 21:38:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:36:00.309 21:38:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:36:00.309 21:38:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:36:00.309 21:38:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:00.309 21:38:49 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:36:00.309 21:38:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:36:00.309 21:38:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:36:00.309 21:38:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:36:00.309 21:38:49 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:00.309 21:38:49 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:00.584 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:00.584 ... 00:36:00.584 fio-3.35 00:36:00.584 Starting 3 threads 00:36:12.817 00:36:12.817 filename0: (groupid=0, jobs=1): err= 0: pid=109906: Fri Apr 26 21:39:00 2024 00:36:12.817 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(245MiB/10004msec) 00:36:12.817 slat (nsec): min=5754, max=41731, avg=12313.31, stdev=3108.27 00:36:12.817 clat (usec): min=8445, max=36646, avg=15325.30, stdev=1764.37 00:36:12.817 lat (usec): min=8452, max=36659, avg=15337.61, stdev=1765.04 00:36:12.817 clat percentiles (usec): 00:36:12.817 | 1.00th=[ 9241], 5.00th=[12256], 10.00th=[13698], 20.00th=[14484], 00:36:12.817 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15533], 60.00th=[15795], 00:36:12.817 | 70.00th=[16188], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:36:12.817 | 99.00th=[18220], 99.50th=[18744], 99.90th=[27657], 99.95th=[36439], 00:36:12.817 | 99.99th=[36439] 00:36:12.817 bw ( KiB/s): min=23040, max=29184, per=27.76%, avg=25101.47, stdev=1532.38, samples=19 00:36:12.817 iops : min= 180, max= 228, avg=196.11, stdev=11.97, samples=19 00:36:12.817 lat (msec) : 10=2.71%, 20=97.03%, 50=0.26% 00:36:12.817 cpu : usr=96.07%, sys=3.02%, ctx=26, majf=0, minf=9 00:36:12.817 IO depths : 1=5.6%, 2=94.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.818 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.818 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:12.818 filename0: (groupid=0, jobs=1): err= 0: pid=109907: Fri Apr 26 21:39:00 2024 00:36:12.818 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(309MiB/10003msec) 00:36:12.818 slat (nsec): min=6076, max=54948, avg=12234.48, stdev=3547.53 00:36:12.818 clat (usec): min=4855, max=29529, avg=12118.57, 
stdev=1687.29 00:36:12.818 lat (usec): min=4864, max=29542, avg=12130.81, stdev=1687.72 00:36:12.818 clat percentiles (usec): 00:36:12.818 | 1.00th=[ 6718], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11207], 00:36:12.818 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12518], 00:36:12.818 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13829], 95.00th=[14353], 00:36:12.818 | 99.00th=[15533], 99.50th=[16188], 99.90th=[28705], 99.95th=[28705], 00:36:12.818 | 99.99th=[29492] 00:36:12.818 bw ( KiB/s): min=28672, max=36681, per=35.02%, avg=31667.00, stdev=2137.76, samples=19 00:36:12.818 iops : min= 224, max= 286, avg=247.37, stdev=16.63, samples=19 00:36:12.818 lat (msec) : 10=6.39%, 20=93.49%, 50=0.12% 00:36:12.818 cpu : usr=95.55%, sys=3.39%, ctx=5, majf=0, minf=9 00:36:12.818 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.818 issued rwts: total=2473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.818 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:12.818 filename0: (groupid=0, jobs=1): err= 0: pid=109908: Fri Apr 26 21:39:00 2024 00:36:12.818 read: IOPS=263, BW=33.0MiB/s (34.6MB/s)(330MiB/10005msec) 00:36:12.818 slat (nsec): min=3714, max=54418, avg=13254.05, stdev=4360.74 00:36:12.818 clat (usec): min=5689, max=55996, avg=11356.13, stdev=3838.45 00:36:12.818 lat (usec): min=5692, max=56010, avg=11369.38, stdev=3838.51 00:36:12.818 clat percentiles (usec): 00:36:12.818 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:36:12.818 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:36:12.818 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:36:12.818 | 99.00th=[16188], 99.50th=[52167], 99.90th=[55313], 99.95th=[55313], 00:36:12.818 | 99.99th=[55837] 00:36:12.818 bw ( KiB/s): min=29440, max=36096, per=37.35%, avg=33778.53, stdev=1749.80, samples=19 00:36:12.818 iops : min= 230, max= 282, avg=263.89, stdev=13.67, samples=19 00:36:12.818 lat (msec) : 10=12.13%, 20=86.96%, 50=0.11%, 100=0.80% 00:36:12.818 cpu : usr=95.13%, sys=3.68%, ctx=15, majf=0, minf=0 00:36:12.818 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.818 issued rwts: total=2639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.818 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:12.818 00:36:12.818 Run status group 0 (all jobs): 00:36:12.818 READ: bw=88.3MiB/s (92.6MB/s), 24.4MiB/s-33.0MiB/s (25.6MB/s-34.6MB/s), io=884MiB (926MB), run=10003-10005msec 00:36:12.818 21:39:00 -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:12.818 21:39:00 -- target/dif.sh@43 -- # local sub 00:36:12.818 21:39:00 -- target/dif.sh@45 -- # for sub in "$@" 00:36:12.818 21:39:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:12.818 21:39:00 -- target/dif.sh@36 -- # local sub_id=0 00:36:12.818 21:39:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:12.818 21:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:12.818 21:39:00 -- common/autotest_common.sh@10 -- # set +x 00:36:12.818 21:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:12.818 21:39:00 -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:36:12.818 21:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:12.818 21:39:00 -- common/autotest_common.sh@10 -- # set +x 00:36:12.818 ************************************ 00:36:12.818 END TEST fio_dif_digest 00:36:12.818 ************************************ 00:36:12.818 21:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:12.818 00:36:12.818 real 0m10.941s 00:36:12.818 user 0m29.290s 00:36:12.818 sys 0m1.293s 00:36:12.818 21:39:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:12.818 21:39:00 -- common/autotest_common.sh@10 -- # set +x 00:36:12.818 21:39:00 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:12.818 21:39:00 -- target/dif.sh@147 -- # nvmftestfini 00:36:12.818 21:39:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:36:12.818 21:39:00 -- nvmf/common.sh@117 -- # sync 00:36:12.818 21:39:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:12.818 21:39:00 -- nvmf/common.sh@120 -- # set +e 00:36:12.818 21:39:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:12.818 21:39:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:12.818 rmmod nvme_tcp 00:36:12.818 rmmod nvme_fabrics 00:36:12.818 rmmod nvme_keyring 00:36:12.818 21:39:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:12.818 21:39:00 -- nvmf/common.sh@124 -- # set -e 00:36:12.818 21:39:00 -- nvmf/common.sh@125 -- # return 0 00:36:12.818 21:39:00 -- nvmf/common.sh@478 -- # '[' -n 109125 ']' 00:36:12.818 21:39:00 -- nvmf/common.sh@479 -- # killprocess 109125 00:36:12.818 21:39:00 -- common/autotest_common.sh@936 -- # '[' -z 109125 ']' 00:36:12.818 21:39:00 -- common/autotest_common.sh@940 -- # kill -0 109125 00:36:12.818 21:39:00 -- common/autotest_common.sh@941 -- # uname 00:36:12.818 21:39:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:12.818 21:39:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 109125 00:36:12.818 21:39:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:12.818 21:39:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:12.818 21:39:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 109125' 00:36:12.818 killing process with pid 109125 00:36:12.818 21:39:00 -- common/autotest_common.sh@955 -- # kill 109125 00:36:12.818 21:39:00 -- common/autotest_common.sh@960 -- # wait 109125 00:36:12.818 21:39:00 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:36:12.818 21:39:00 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:12.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:12.818 Waiting for block devices as requested 00:36:12.818 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:12.818 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:12.818 21:39:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:36:12.818 21:39:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:36:12.818 21:39:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:12.818 21:39:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:12.818 21:39:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.818 21:39:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:12.818 21:39:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.818 21:39:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:36:12.818 00:36:12.818 real 1m0.400s 00:36:12.818 
user 3m56.225s 00:36:12.818 sys 0m11.586s 00:36:12.818 21:39:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:12.818 21:39:01 -- common/autotest_common.sh@10 -- # set +x 00:36:12.818 ************************************ 00:36:12.818 END TEST nvmf_dif 00:36:12.818 ************************************ 00:36:12.818 21:39:01 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:12.818 21:39:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:12.818 21:39:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:12.818 21:39:01 -- common/autotest_common.sh@10 -- # set +x 00:36:12.818 ************************************ 00:36:12.818 START TEST nvmf_abort_qd_sizes 00:36:12.818 ************************************ 00:36:12.818 21:39:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:12.818 * Looking for test storage... 00:36:12.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:12.818 21:39:01 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:12.818 21:39:01 -- nvmf/common.sh@7 -- # uname -s 00:36:12.818 21:39:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:12.818 21:39:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:12.818 21:39:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:12.818 21:39:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:12.818 21:39:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:12.818 21:39:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:12.818 21:39:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:12.818 21:39:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:12.818 21:39:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:12.818 21:39:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:12.818 21:39:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:36:12.818 21:39:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:36:12.818 21:39:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:12.818 21:39:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:12.818 21:39:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:12.818 21:39:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:12.818 21:39:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:12.818 21:39:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:12.818 21:39:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:12.818 21:39:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:12.818 21:39:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.819 21:39:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.819 21:39:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.819 21:39:01 -- paths/export.sh@5 -- # export PATH 00:36:12.819 21:39:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:12.819 21:39:01 -- nvmf/common.sh@47 -- # : 0 00:36:12.819 21:39:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:12.819 21:39:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:12.819 21:39:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:12.819 21:39:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:12.819 21:39:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:12.819 21:39:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:12.819 21:39:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:12.819 21:39:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:12.819 21:39:01 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:12.819 21:39:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:36:12.819 21:39:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:12.819 21:39:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:36:12.819 21:39:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:36:12.819 21:39:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:36:12.819 21:39:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.819 21:39:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:12.819 21:39:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:12.819 21:39:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:36:12.819 21:39:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:36:12.819 21:39:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:36:12.819 21:39:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:36:12.819 21:39:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:36:12.819 21:39:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:36:12.819 21:39:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:12.819 21:39:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:12.819 21:39:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:12.819 21:39:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:36:12.819 21:39:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:12.819 21:39:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
00:36:12.819 21:39:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:12.819 21:39:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:12.819 21:39:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:12.819 21:39:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:12.819 21:39:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:12.819 21:39:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:12.819 21:39:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:36:12.819 21:39:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:36:12.819 Cannot find device "nvmf_tgt_br" 00:36:12.819 21:39:01 -- nvmf/common.sh@155 -- # true 00:36:12.819 21:39:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:36:12.819 Cannot find device "nvmf_tgt_br2" 00:36:12.819 21:39:01 -- nvmf/common.sh@156 -- # true 00:36:12.819 21:39:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:36:12.819 21:39:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:36:12.819 Cannot find device "nvmf_tgt_br" 00:36:12.819 21:39:01 -- nvmf/common.sh@158 -- # true 00:36:12.819 21:39:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:36:12.819 Cannot find device "nvmf_tgt_br2" 00:36:12.819 21:39:01 -- nvmf/common.sh@159 -- # true 00:36:12.819 21:39:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:36:12.819 21:39:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:36:12.819 21:39:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:12.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:12.819 21:39:01 -- nvmf/common.sh@162 -- # true 00:36:12.819 21:39:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:12.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:12.819 21:39:01 -- nvmf/common.sh@163 -- # true 00:36:12.819 21:39:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:36:12.819 21:39:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:12.819 21:39:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:12.819 21:39:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:12.819 21:39:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:12.819 21:39:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:12.819 21:39:02 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:12.819 21:39:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:12.819 21:39:02 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:12.819 21:39:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:36:12.819 21:39:02 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:36:12.819 21:39:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:36:12.819 21:39:02 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:36:12.819 21:39:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:12.819 21:39:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:12.819 21:39:02 -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:12.819 21:39:02 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:36:12.819 21:39:02 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:36:12.819 21:39:02 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:36:12.819 21:39:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:12.819 21:39:02 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:13.079 21:39:02 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:13.079 21:39:02 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:13.080 21:39:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:36:13.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:13.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:36:13.080 00:36:13.080 --- 10.0.0.2 ping statistics --- 00:36:13.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.080 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:36:13.080 21:39:02 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:36:13.080 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:13.080 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:36:13.080 00:36:13.080 --- 10.0.0.3 ping statistics --- 00:36:13.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.080 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:36:13.080 21:39:02 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:13.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:13.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:36:13.080 00:36:13.080 --- 10.0.0.1 ping statistics --- 00:36:13.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.080 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:36:13.080 21:39:02 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.080 21:39:02 -- nvmf/common.sh@422 -- # return 0 00:36:13.080 21:39:02 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:36:13.080 21:39:02 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:13.649 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:13.907 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:13.907 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:13.907 21:39:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:13.907 21:39:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:36:13.907 21:39:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:36:13.907 21:39:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:13.907 21:39:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:36:13.907 21:39:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:36:13.907 21:39:03 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:13.907 21:39:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:36:13.907 21:39:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:36:13.907 21:39:03 -- common/autotest_common.sh@10 -- # set +x 00:36:13.907 21:39:03 -- nvmf/common.sh@470 -- # nvmfpid=110502 00:36:13.907 21:39:03 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:13.907 21:39:03 -- nvmf/common.sh@471 -- # waitforlisten 110502 00:36:13.907 21:39:03 -- 
common/autotest_common.sh@817 -- # '[' -z 110502 ']' 00:36:13.907 21:39:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.907 21:39:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:13.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:13.907 21:39:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:13.907 21:39:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:13.907 21:39:03 -- common/autotest_common.sh@10 -- # set +x 00:36:13.907 [2024-04-26 21:39:03.150924] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:36:13.907 [2024-04-26 21:39:03.151007] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:14.166 [2024-04-26 21:39:03.292295] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:14.166 [2024-04-26 21:39:03.343061] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:14.166 [2024-04-26 21:39:03.343105] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:14.166 [2024-04-26 21:39:03.343111] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:14.166 [2024-04-26 21:39:03.343116] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:14.166 [2024-04-26 21:39:03.343121] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
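Annotation (not part of the captured trace): the entries above launch the SPDK NVMe-oF target inside the nvmf_tgt_ns_spdk network namespace and then wait for its RPC socket before issuing rpc_cmd calls. A minimal sketch of that launch, assuming the default /var/tmp/spdk.sock RPC socket that waitforlisten polls:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # crude stand-in for waitforlisten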
00:36:14.166 [2024-04-26 21:39:03.343356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:14.166 [2024-04-26 21:39:03.343540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:14.166 [2024-04-26 21:39:03.343719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:14.166 [2024-04-26 21:39:03.343725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:15.104 21:39:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:15.104 21:39:04 -- common/autotest_common.sh@850 -- # return 0 00:36:15.104 21:39:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:36:15.104 21:39:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:36:15.104 21:39:04 -- common/autotest_common.sh@10 -- # set +x 00:36:15.104 21:39:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.104 21:39:04 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:15.104 21:39:04 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:15.105 21:39:04 -- scripts/common.sh@309 -- # local bdf bdfs 00:36:15.105 21:39:04 -- scripts/common.sh@310 -- # local nvmes 00:36:15.105 21:39:04 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:36:15.105 21:39:04 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:36:15.105 21:39:04 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:36:15.105 21:39:04 -- scripts/common.sh@295 -- # local bdf= 00:36:15.105 21:39:04 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:36:15.105 21:39:04 -- scripts/common.sh@230 -- # local class 00:36:15.105 21:39:04 -- scripts/common.sh@231 -- # local subclass 00:36:15.105 21:39:04 -- scripts/common.sh@232 -- # local progif 00:36:15.105 21:39:04 -- scripts/common.sh@233 -- # printf %02x 1 00:36:15.105 21:39:04 -- scripts/common.sh@233 -- # class=01 00:36:15.105 21:39:04 -- scripts/common.sh@234 -- # printf %02x 8 00:36:15.105 21:39:04 -- scripts/common.sh@234 -- # subclass=08 00:36:15.105 21:39:04 -- scripts/common.sh@235 -- # printf %02x 2 00:36:15.105 21:39:04 -- scripts/common.sh@235 -- # progif=02 00:36:15.105 21:39:04 -- scripts/common.sh@237 -- # hash lspci 00:36:15.105 21:39:04 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:36:15.105 21:39:04 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:36:15.105 21:39:04 -- scripts/common.sh@240 -- # grep -i -- -p02 00:36:15.105 21:39:04 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:36:15.105 21:39:04 -- scripts/common.sh@242 -- # tr -d '"' 00:36:15.105 21:39:04 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:36:15.105 21:39:04 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:36:15.105 21:39:04 -- scripts/common.sh@15 -- # local i 00:36:15.105 21:39:04 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:36:15.105 21:39:04 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:36:15.105 21:39:04 -- scripts/common.sh@24 -- # return 0 00:36:15.105 21:39:04 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:36:15.105 21:39:04 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:36:15.105 21:39:04 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:36:15.105 21:39:04 -- scripts/common.sh@15 -- # local i 00:36:15.105 21:39:04 -- scripts/common.sh@18 -- # [[ =~ 
0000:00:11.0 ]] 00:36:15.105 21:39:04 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:36:15.105 21:39:04 -- scripts/common.sh@24 -- # return 0 00:36:15.105 21:39:04 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:36:15.105 21:39:04 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:15.105 21:39:04 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:36:15.105 21:39:04 -- scripts/common.sh@320 -- # uname -s 00:36:15.105 21:39:04 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:15.105 21:39:04 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:15.105 21:39:04 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:15.105 21:39:04 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:36:15.105 21:39:04 -- scripts/common.sh@320 -- # uname -s 00:36:15.105 21:39:04 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:15.105 21:39:04 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:15.105 21:39:04 -- scripts/common.sh@325 -- # (( 2 )) 00:36:15.105 21:39:04 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:15.105 21:39:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:15.105 21:39:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:15.105 21:39:04 -- common/autotest_common.sh@10 -- # set +x 00:36:15.105 ************************************ 00:36:15.105 START TEST spdk_target_abort 00:36:15.105 ************************************ 00:36:15.105 21:39:04 -- common/autotest_common.sh@1111 -- # spdk_target 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:36:15.105 21:39:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:15.105 21:39:04 -- common/autotest_common.sh@10 -- # set +x 00:36:15.105 spdk_targetn1 00:36:15.105 21:39:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:15.105 21:39:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:15.105 21:39:04 -- common/autotest_common.sh@10 -- # set +x 00:36:15.105 [2024-04-26 21:39:04.305772] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.105 21:39:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:15.105 21:39:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:15.105 21:39:04 -- common/autotest_common.sh@10 -- # set +x 00:36:15.105 21:39:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:15.105 21:39:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:15.105 21:39:04 -- common/autotest_common.sh@10 -- # set +x 00:36:15.105 21:39:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:15.105 21:39:04 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:36:15.105 21:39:04 -- common/autotest_common.sh@10 -- # set +x 00:36:15.105 [2024-04-26 21:39:04.345845] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.105 21:39:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:15.105 21:39:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:15.364 21:39:04 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:15.365 21:39:04 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:15.365 21:39:04 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:15.365 21:39:04 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:18.651 Initializing NVMe Controllers 00:36:18.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:18.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:18.651 Initialization complete. Launching workers. 
00:36:18.651 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12808, failed: 0 00:36:18.651 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1160, failed to submit 11648 00:36:18.651 success 731, unsuccess 429, failed 0 00:36:18.651 21:39:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:18.651 21:39:07 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:21.993 Initializing NVMe Controllers 00:36:21.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:21.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:21.993 Initialization complete. Launching workers. 00:36:21.993 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5948, failed: 0 00:36:21.993 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1265, failed to submit 4683 00:36:21.993 success 240, unsuccess 1025, failed 0 00:36:21.993 21:39:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:21.994 21:39:10 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:25.297 Initializing NVMe Controllers 00:36:25.297 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:25.297 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:25.297 Initialization complete. Launching workers. 00:36:25.297 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30790, failed: 0 00:36:25.297 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2798, failed to submit 27992 00:36:25.297 success 497, unsuccess 2301, failed 0 00:36:25.297 21:39:14 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:25.297 21:39:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:25.297 21:39:14 -- common/autotest_common.sh@10 -- # set +x 00:36:25.297 21:39:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:25.297 21:39:14 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:25.297 21:39:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:25.297 21:39:14 -- common/autotest_common.sh@10 -- # set +x 00:36:26.236 21:39:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:26.236 21:39:15 -- target/abort_qd_sizes.sh@61 -- # killprocess 110502 00:36:26.236 21:39:15 -- common/autotest_common.sh@936 -- # '[' -z 110502 ']' 00:36:26.236 21:39:15 -- common/autotest_common.sh@940 -- # kill -0 110502 00:36:26.236 21:39:15 -- common/autotest_common.sh@941 -- # uname 00:36:26.236 21:39:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:26.236 21:39:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 110502 00:36:26.236 21:39:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:26.236 21:39:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:26.236 21:39:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 110502' 00:36:26.236 killing process with pid 110502 00:36:26.236 21:39:15 -- common/autotest_common.sh@955 -- # kill 
110502 00:36:26.236 21:39:15 -- common/autotest_common.sh@960 -- # wait 110502 00:36:26.496 00:36:26.496 real 0m11.407s 00:36:26.496 user 0m46.962s 00:36:26.496 sys 0m1.531s 00:36:26.496 21:39:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:26.496 21:39:15 -- common/autotest_common.sh@10 -- # set +x 00:36:26.496 ************************************ 00:36:26.496 END TEST spdk_target_abort 00:36:26.496 ************************************ 00:36:26.496 21:39:15 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:26.496 21:39:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:26.496 21:39:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:26.496 21:39:15 -- common/autotest_common.sh@10 -- # set +x 00:36:26.755 ************************************ 00:36:26.755 START TEST kernel_target_abort 00:36:26.755 ************************************ 00:36:26.755 21:39:15 -- common/autotest_common.sh@1111 -- # kernel_target 00:36:26.755 21:39:15 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:26.755 21:39:15 -- nvmf/common.sh@717 -- # local ip 00:36:26.755 21:39:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:36:26.755 21:39:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:36:26.755 21:39:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:26.755 21:39:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:26.755 21:39:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:36:26.755 21:39:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:26.755 21:39:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:36:26.755 21:39:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:36:26.755 21:39:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:36:26.755 21:39:15 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:26.755 21:39:15 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:26.755 21:39:15 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:36:26.755 21:39:15 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:26.755 21:39:15 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:26.755 21:39:15 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:26.755 21:39:15 -- nvmf/common.sh@628 -- # local block nvme 00:36:26.755 21:39:15 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:26.755 21:39:15 -- nvmf/common.sh@631 -- # modprobe nvmet 00:36:26.755 21:39:15 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:26.755 21:39:15 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:27.014 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:27.014 Waiting for block devices as requested 00:36:27.274 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:27.274 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:27.274 21:39:16 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:36:27.274 21:39:16 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:27.274 21:39:16 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:36:27.274 21:39:16 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:27.274 21:39:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:27.274 21:39:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:27.274 21:39:16 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:36:27.274 21:39:16 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:27.274 21:39:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:36:27.534 No valid GPT data, bailing 00:36:27.534 21:39:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:27.534 21:39:16 -- scripts/common.sh@391 -- # pt= 00:36:27.534 21:39:16 -- scripts/common.sh@392 -- # return 1 00:36:27.534 21:39:16 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:36:27.534 21:39:16 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:36:27.534 21:39:16 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:36:27.534 21:39:16 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:36:27.534 21:39:16 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:36:27.534 21:39:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:36:27.534 21:39:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:27.534 21:39:16 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:36:27.534 21:39:16 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:36:27.534 21:39:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:36:27.534 No valid GPT data, bailing 00:36:27.534 21:39:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:36:27.534 21:39:16 -- scripts/common.sh@391 -- # pt= 00:36:27.534 21:39:16 -- scripts/common.sh@392 -- # return 1 00:36:27.534 21:39:16 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:36:27.534 21:39:16 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:36:27.534 21:39:16 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:36:27.534 21:39:16 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:36:27.534 21:39:16 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:36:27.534 21:39:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:36:27.534 21:39:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:27.534 21:39:16 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:36:27.534 21:39:16 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:36:27.534 21:39:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:36:27.534 No valid GPT data, bailing 00:36:27.534 21:39:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value 
/dev/nvme0n3 00:36:27.534 21:39:16 -- scripts/common.sh@391 -- # pt= 00:36:27.534 21:39:16 -- scripts/common.sh@392 -- # return 1 00:36:27.534 21:39:16 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:36:27.534 21:39:16 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:36:27.534 21:39:16 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:36:27.534 21:39:16 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:36:27.534 21:39:16 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:36:27.534 21:39:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:36:27.534 21:39:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:27.534 21:39:16 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:36:27.534 21:39:16 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:36:27.534 21:39:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:36:27.534 No valid GPT data, bailing 00:36:27.794 21:39:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:36:27.794 21:39:16 -- scripts/common.sh@391 -- # pt= 00:36:27.794 21:39:16 -- scripts/common.sh@392 -- # return 1 00:36:27.794 21:39:16 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:36:27.794 21:39:16 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:36:27.794 21:39:16 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:27.794 21:39:16 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:27.794 21:39:16 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:27.794 21:39:16 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:27.794 21:39:16 -- nvmf/common.sh@656 -- # echo 1 00:36:27.794 21:39:16 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:36:27.794 21:39:16 -- nvmf/common.sh@658 -- # echo 1 00:36:27.794 21:39:16 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:36:27.794 21:39:16 -- nvmf/common.sh@661 -- # echo tcp 00:36:27.795 21:39:16 -- nvmf/common.sh@662 -- # echo 4420 00:36:27.795 21:39:16 -- nvmf/common.sh@663 -- # echo ipv4 00:36:27.795 21:39:16 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:27.795 21:39:16 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca --hostid=684e36b6-186e-42df-9976-6b13930a8eca -a 10.0.0.1 -t tcp -s 4420 00:36:27.795 00:36:27.795 Discovery Log Number of Records 2, Generation counter 2 00:36:27.795 =====Discovery Log Entry 0====== 00:36:27.795 trtype: tcp 00:36:27.795 adrfam: ipv4 00:36:27.795 subtype: current discovery subsystem 00:36:27.795 treq: not specified, sq flow control disable supported 00:36:27.795 portid: 1 00:36:27.795 trsvcid: 4420 00:36:27.795 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:27.795 traddr: 10.0.0.1 00:36:27.795 eflags: none 00:36:27.795 sectype: none 00:36:27.795 =====Discovery Log Entry 1====== 00:36:27.795 trtype: tcp 00:36:27.795 adrfam: ipv4 00:36:27.795 subtype: nvme subsystem 00:36:27.795 treq: not specified, sq flow control disable supported 00:36:27.795 portid: 1 00:36:27.795 trsvcid: 4420 00:36:27.795 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:27.795 traddr: 10.0.0.1 00:36:27.795 eflags: none 00:36:27.795 sectype: none 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:27.795 
21:39:16 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:27.795 21:39:16 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:31.105 Initializing NVMe Controllers 00:36:31.105 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:31.105 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:31.105 Initialization complete. Launching workers. 00:36:31.105 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41108, failed: 0 00:36:31.105 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 41108, failed to submit 0 00:36:31.105 success 0, unsuccess 41108, failed 0 00:36:31.105 21:39:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:31.105 21:39:20 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:34.398 Initializing NVMe Controllers 00:36:34.398 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:34.398 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:34.398 Initialization complete. Launching workers. 
00:36:34.398 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 83148, failed: 0 00:36:34.398 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37925, failed to submit 45223 00:36:34.398 success 0, unsuccess 37925, failed 0 00:36:34.398 21:39:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:34.398 21:39:23 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:37.691 Initializing NVMe Controllers 00:36:37.691 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:37.691 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:37.691 Initialization complete. Launching workers. 00:36:37.691 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102853, failed: 0 00:36:37.691 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25740, failed to submit 77113 00:36:37.691 success 0, unsuccess 25740, failed 0 00:36:37.691 21:39:26 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:37.691 21:39:26 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:37.691 21:39:26 -- nvmf/common.sh@675 -- # echo 0 00:36:37.691 21:39:26 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:37.691 21:39:26 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:37.691 21:39:26 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:37.691 21:39:26 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:37.691 21:39:26 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:36:37.691 21:39:26 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:36:37.691 21:39:26 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:37.950 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:41.241 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:41.241 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:41.501 00:36:41.501 real 0m14.761s 00:36:41.501 user 0m6.847s 00:36:41.501 sys 0m5.611s 00:36:41.501 21:39:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:41.501 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:36:41.501 ************************************ 00:36:41.501 END TEST kernel_target_abort 00:36:41.501 ************************************ 00:36:41.501 21:39:30 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:41.501 21:39:30 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:41.501 21:39:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:36:41.501 21:39:30 -- nvmf/common.sh@117 -- # sync 00:36:41.501 21:39:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:41.501 21:39:30 -- nvmf/common.sh@120 -- # set +e 00:36:41.501 21:39:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:41.501 21:39:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:41.501 rmmod nvme_tcp 00:36:41.501 rmmod nvme_fabrics 00:36:41.501 rmmod nvme_keyring 00:36:41.501 21:39:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:41.501 21:39:30 -- nvmf/common.sh@124 -- # set -e 00:36:41.501 
21:39:30 -- nvmf/common.sh@125 -- # return 0 00:36:41.501 21:39:30 -- nvmf/common.sh@478 -- # '[' -n 110502 ']' 00:36:41.501 21:39:30 -- nvmf/common.sh@479 -- # killprocess 110502 00:36:41.501 21:39:30 -- common/autotest_common.sh@936 -- # '[' -z 110502 ']' 00:36:41.501 21:39:30 -- common/autotest_common.sh@940 -- # kill -0 110502 00:36:41.501 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (110502) - No such process 00:36:41.501 Process with pid 110502 is not found 00:36:41.501 21:39:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 110502 is not found' 00:36:41.501 21:39:30 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:36:41.501 21:39:30 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:42.070 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:42.070 Waiting for block devices as requested 00:36:42.070 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:42.329 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:42.329 21:39:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:36:42.329 21:39:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:36:42.329 21:39:31 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:42.329 21:39:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:42.329 21:39:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:42.329 21:39:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:42.329 21:39:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:42.329 21:39:31 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:36:42.329 00:36:42.329 real 0m29.842s 00:36:42.329 user 0m55.024s 00:36:42.329 sys 0m8.975s 00:36:42.329 21:39:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:42.329 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:36:42.329 ************************************ 00:36:42.329 END TEST nvmf_abort_qd_sizes 00:36:42.329 ************************************ 00:36:42.329 21:39:31 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:36:42.329 21:39:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:36:42.329 21:39:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:36:42.329 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:36:42.590 ************************************ 00:36:42.590 START TEST keyring_file 00:36:42.590 ************************************ 00:36:42.590 21:39:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:36:42.590 * Looking for test storage... 
00:36:42.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:36:42.590 21:39:31 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:36:42.590 21:39:31 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:42.590 21:39:31 -- nvmf/common.sh@7 -- # uname -s 00:36:42.590 21:39:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:42.590 21:39:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:42.590 21:39:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:42.590 21:39:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:42.590 21:39:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:42.590 21:39:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:42.590 21:39:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:42.590 21:39:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:42.590 21:39:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:42.590 21:39:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:42.590 21:39:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:684e36b6-186e-42df-9976-6b13930a8eca 00:36:42.590 21:39:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=684e36b6-186e-42df-9976-6b13930a8eca 00:36:42.590 21:39:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:42.590 21:39:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:42.590 21:39:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:42.590 21:39:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:42.590 21:39:31 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:42.590 21:39:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:42.590 21:39:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:42.590 21:39:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:42.590 21:39:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.590 21:39:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.590 21:39:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.590 21:39:31 -- paths/export.sh@5 -- # export PATH 00:36:42.590 21:39:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.590 21:39:31 -- nvmf/common.sh@47 -- # : 0 00:36:42.590 21:39:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:42.590 21:39:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:42.590 21:39:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:42.590 21:39:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:42.590 21:39:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:42.590 21:39:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:42.590 21:39:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:42.590 21:39:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:42.590 21:39:31 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:42.590 21:39:31 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:42.590 21:39:31 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:42.590 21:39:31 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:42.590 21:39:31 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:42.590 21:39:31 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:42.590 21:39:31 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:42.590 21:39:31 -- keyring/common.sh@15 -- # local name key digest path 00:36:42.590 21:39:31 -- keyring/common.sh@17 -- # name=key0 00:36:42.590 21:39:31 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:42.590 21:39:31 -- keyring/common.sh@17 -- # digest=0 00:36:42.590 21:39:31 -- keyring/common.sh@18 -- # mktemp 00:36:42.590 21:39:31 -- keyring/common.sh@18 -- # path=/tmp/tmp.70EfR7POqm 00:36:42.590 21:39:31 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:42.590 21:39:31 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:42.590 21:39:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:36:42.590 21:39:31 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:36:42.590 21:39:31 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:36:42.590 21:39:31 -- nvmf/common.sh@693 -- # digest=0 00:36:42.590 21:39:31 -- nvmf/common.sh@694 -- # python - 00:36:42.590 21:39:31 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.70EfR7POqm 00:36:42.590 21:39:31 -- keyring/common.sh@23 -- # echo /tmp/tmp.70EfR7POqm 00:36:42.590 21:39:31 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.70EfR7POqm 00:36:42.590 21:39:31 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:42.590 21:39:31 -- keyring/common.sh@15 -- # local name key digest path 00:36:42.590 21:39:31 -- keyring/common.sh@17 -- # name=key1 00:36:42.590 21:39:31 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:42.590 21:39:31 -- keyring/common.sh@17 -- # digest=0 00:36:42.590 21:39:31 -- keyring/common.sh@18 -- # mktemp 00:36:42.590 21:39:31 -- keyring/common.sh@18 -- # path=/tmp/tmp.lvTDcFVTVE 00:36:42.590 21:39:31 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:42.590 21:39:31 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:36:42.590 21:39:31 -- nvmf/common.sh@691 -- # local prefix key digest 00:36:42.590 21:39:31 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:36:42.590 21:39:31 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:36:42.590 21:39:31 -- nvmf/common.sh@693 -- # digest=0 00:36:42.590 21:39:31 -- nvmf/common.sh@694 -- # python - 00:36:42.851 21:39:31 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lvTDcFVTVE 00:36:42.851 21:39:31 -- keyring/common.sh@23 -- # echo /tmp/tmp.lvTDcFVTVE 00:36:42.851 21:39:31 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.lvTDcFVTVE 00:36:42.851 21:39:31 -- keyring/file.sh@30 -- # tgtpid=111453 00:36:42.851 21:39:31 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:42.851 21:39:31 -- keyring/file.sh@32 -- # waitforlisten 111453 00:36:42.851 21:39:31 -- common/autotest_common.sh@817 -- # '[' -z 111453 ']' 00:36:42.851 21:39:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:42.851 21:39:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:42.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:42.851 21:39:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:42.851 21:39:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:42.851 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:36:42.851 [2024-04-26 21:39:31.953118] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:36:42.851 [2024-04-26 21:39:31.953214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111453 ] 00:36:42.851 [2024-04-26 21:39:32.093157] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.111 [2024-04-26 21:39:32.150196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:43.752 21:39:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:43.752 21:39:32 -- common/autotest_common.sh@850 -- # return 0 00:36:43.752 21:39:32 -- keyring/file.sh@33 -- # rpc_cmd 00:36:43.752 21:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:43.752 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:36:43.752 [2024-04-26 21:39:32.834446] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:43.752 null0 00:36:43.752 [2024-04-26 21:39:32.866362] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:43.752 [2024-04-26 21:39:32.866566] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:43.752 [2024-04-26 21:39:32.874378] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:43.752 21:39:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:36:43.752 21:39:32 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:43.752 21:39:32 -- common/autotest_common.sh@638 -- # local es=0 00:36:43.752 21:39:32 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:43.752 21:39:32 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:36:43.752 21:39:32 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:43.752 21:39:32 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:36:43.752 21:39:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:43.752 21:39:32 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:43.752 21:39:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:36:43.752 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:36:43.752 [2024-04-26 21:39:32.890326] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.2024/04/26 21:39:32 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:36:43.752 request: 00:36:43.752 { 00:36:43.752 "method": "nvmf_subsystem_add_listener", 00:36:43.752 "params": { 00:36:43.752 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:43.752 "secure_channel": false, 00:36:43.752 "listen_address": { 00:36:43.752 "trtype": "tcp", 00:36:43.752 "traddr": "127.0.0.1", 00:36:43.752 "trsvcid": "4420" 00:36:43.752 } 00:36:43.752 } 00:36:43.752 } 00:36:43.752 Got JSON-RPC error response 00:36:43.752 GoRPCClient: error on JSON-RPC call 00:36:43.752 21:39:32 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:36:43.752 21:39:32 -- common/autotest_common.sh@641 -- # es=1 00:36:43.752 21:39:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:36:43.752 21:39:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:36:43.752 21:39:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:36:43.752 21:39:32 -- keyring/file.sh@46 -- # bperfpid=111485 00:36:43.752 21:39:32 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:43.753 21:39:32 -- keyring/file.sh@48 -- # waitforlisten 111485 /var/tmp/bperf.sock 00:36:43.753 21:39:32 -- common/autotest_common.sh@817 -- # '[' -z 111485 ']' 00:36:43.753 21:39:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:43.753 21:39:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:43.753 21:39:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:43.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:43.753 21:39:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:43.753 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:36:43.753 [2024-04-26 21:39:32.950866] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
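bdevperf is started with -z (wait for RPC) on its own socket, and every bperf_cmd in the trace is rpc.py pointed at that socket. A minimal sketch of the wrapper, using the socket path and the two PSK files created above:

    bperfsock=/var/tmp/bperf.sock
    bperf_cmd() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperfsock" "$@"
    }

    # Register the two PSK files with bdevperf's keyring
    bperf_cmd keyring_file_add_key key0 /tmp/tmp.70EfR7POqm
    bperf_cmd keyring_file_add_key key1 /tmp/tmp.lvTDcFVTVE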
00:36:43.753 [2024-04-26 21:39:32.950933] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111485 ] 00:36:44.024 [2024-04-26 21:39:33.075110] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.024 [2024-04-26 21:39:33.124646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.960 21:39:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:44.960 21:39:33 -- common/autotest_common.sh@850 -- # return 0 00:36:44.960 21:39:33 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.70EfR7POqm 00:36:44.960 21:39:33 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.70EfR7POqm 00:36:44.960 21:39:34 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lvTDcFVTVE 00:36:44.960 21:39:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lvTDcFVTVE 00:36:45.218 21:39:34 -- keyring/file.sh@51 -- # get_key key0 00:36:45.218 21:39:34 -- keyring/file.sh@51 -- # jq -r .path 00:36:45.218 21:39:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.218 21:39:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.218 21:39:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:45.476 21:39:34 -- keyring/file.sh@51 -- # [[ /tmp/tmp.70EfR7POqm == \/\t\m\p\/\t\m\p\.\7\0\E\f\R\7\P\O\q\m ]] 00:36:45.476 21:39:34 -- keyring/file.sh@52 -- # get_key key1 00:36:45.476 21:39:34 -- keyring/file.sh@52 -- # jq -r .path 00:36:45.476 21:39:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.476 21:39:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.476 21:39:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:45.736 21:39:34 -- keyring/file.sh@52 -- # [[ /tmp/tmp.lvTDcFVTVE == \/\t\m\p\/\t\m\p\.\l\v\T\D\c\F\V\T\V\E ]] 00:36:45.736 21:39:34 -- keyring/file.sh@53 -- # get_refcnt key0 00:36:45.736 21:39:34 -- keyring/common.sh@12 -- # get_key key0 00:36:45.736 21:39:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.736 21:39:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.736 21:39:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.736 21:39:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:45.736 21:39:34 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:45.736 21:39:34 -- keyring/file.sh@54 -- # get_refcnt key1 00:36:45.736 21:39:34 -- keyring/common.sh@12 -- # get_key key1 00:36:45.736 21:39:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.736 21:39:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.736 21:39:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.736 21:39:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:45.995 21:39:35 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:45.995 21:39:35 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 
--psk key0 00:36:45.995 21:39:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:46.254 [2024-04-26 21:39:35.379204] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:46.254 nvme0n1 00:36:46.254 21:39:35 -- keyring/file.sh@59 -- # get_refcnt key0 00:36:46.254 21:39:35 -- keyring/common.sh@12 -- # get_key key0 00:36:46.254 21:39:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.254 21:39:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.254 21:39:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.254 21:39:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:46.513 21:39:35 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:46.513 21:39:35 -- keyring/file.sh@60 -- # get_refcnt key1 00:36:46.513 21:39:35 -- keyring/common.sh@12 -- # get_key key1 00:36:46.513 21:39:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.513 21:39:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.513 21:39:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.513 21:39:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:46.772 21:39:35 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:46.772 21:39:35 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:46.772 Running I/O for 1 seconds... 00:36:48.152 00:36:48.152 Latency(us) 00:36:48.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:48.152 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:48.152 nvme0n1 : 1.00 17011.31 66.45 0.00 0.00 7506.13 4006.57 18544.68 00:36:48.152 =================================================================================================================== 00:36:48.152 Total : 17011.31 66.45 0.00 0.00 7506.13 4006.57 18544.68 00:36:48.152 0 00:36:48.152 21:39:37 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:48.152 21:39:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:48.152 21:39:37 -- keyring/file.sh@65 -- # get_refcnt key0 00:36:48.152 21:39:37 -- keyring/common.sh@12 -- # get_key key0 00:36:48.152 21:39:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.152 21:39:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:48.152 21:39:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.152 21:39:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.417 21:39:37 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:48.417 21:39:37 -- keyring/file.sh@66 -- # get_refcnt key1 00:36:48.417 21:39:37 -- keyring/common.sh@12 -- # get_key key1 00:36:48.417 21:39:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.417 21:39:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.417 21:39:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.417 21:39:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 
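The attach and the keyring reference-count checks being traced here reduce to this sketch (get_refcnt mirrors the keyring_get_keys + jq pipeline above; the expected counts are the ones the test asserts):

    # Attach a controller through bdevperf using the registered key0 as the TLS PSK
    bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    get_refcnt() {
        bperf_cmd keyring_get_keys | jq -r ".[] | select(.name == \"$1\") | .refcnt"
    }

    [[ $(get_refcnt key0) -eq 2 ]]   # the active TLS connection holds an extra reference on key0
    [[ $(get_refcnt key1) -eq 1 ]]   # key1 is registered but not in use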
00:36:48.679 21:39:37 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:48.679 21:39:37 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:48.679 21:39:37 -- common/autotest_common.sh@638 -- # local es=0 00:36:48.679 21:39:37 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:48.679 21:39:37 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:36:48.679 21:39:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:48.679 21:39:37 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:36:48.679 21:39:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:48.679 21:39:37 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:48.679 21:39:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:48.679 [2024-04-26 21:39:37.869748] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:48.679 [2024-04-26 21:39:37.870139] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1274df0 (107): Transport endpoint is not connected 00:36:48.679 [2024-04-26 21:39:37.871125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1274df0 (9): Bad file descriptor 00:36:48.679 [2024-04-26 21:39:37.872120] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:48.679 [2024-04-26 21:39:37.872143] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:48.679 [2024-04-26 21:39:37.872151] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:36:48.679 2024/04/26 21:39:37 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:36:48.679 request: 00:36:48.679 { 00:36:48.679 "method": "bdev_nvme_attach_controller", 00:36:48.679 "params": { 00:36:48.679 "name": "nvme0", 00:36:48.679 "trtype": "tcp", 00:36:48.679 "traddr": "127.0.0.1", 00:36:48.679 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:48.679 "adrfam": "ipv4", 00:36:48.679 "trsvcid": "4420", 00:36:48.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:48.679 "psk": "key1" 00:36:48.679 } 00:36:48.679 } 00:36:48.679 Got JSON-RPC error response 00:36:48.679 GoRPCClient: error on JSON-RPC call 00:36:48.679 21:39:37 -- common/autotest_common.sh@641 -- # es=1 00:36:48.679 21:39:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:36:48.679 21:39:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:36:48.679 21:39:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:36:48.679 21:39:37 -- keyring/file.sh@71 -- # get_refcnt key0 00:36:48.679 21:39:37 -- keyring/common.sh@12 -- # get_key key0 00:36:48.679 21:39:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.679 21:39:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.679 21:39:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:48.679 21:39:37 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.937 21:39:38 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:48.937 21:39:38 -- keyring/file.sh@72 -- # get_refcnt key1 00:36:48.937 21:39:38 -- keyring/common.sh@12 -- # get_key key1 00:36:48.937 21:39:38 -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:48.937 21:39:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:48.937 21:39:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:48.937 21:39:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.196 21:39:38 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:49.196 21:39:38 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:49.196 21:39:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:49.455 21:39:38 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:49.455 21:39:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:49.714 21:39:38 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:49.714 21:39:38 -- keyring/file.sh@77 -- # jq length 00:36:49.714 21:39:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.714 21:39:38 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:49.714 21:39:38 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.70EfR7POqm 00:36:49.714 21:39:38 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.70EfR7POqm 00:36:49.714 21:39:38 -- common/autotest_common.sh@638 -- # local es=0 00:36:49.714 21:39:38 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.70EfR7POqm 00:36:49.714 21:39:38 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:36:49.714 
21:39:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:49.714 21:39:38 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:36:49.714 21:39:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:49.714 21:39:38 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.70EfR7POqm 00:36:49.714 21:39:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.70EfR7POqm 00:36:49.973 [2024-04-26 21:39:39.142263] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.70EfR7POqm': 0100660 00:36:49.973 [2024-04-26 21:39:39.142323] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:49.973 2024/04/26 21:39:39 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.70EfR7POqm], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:36:49.973 request: 00:36:49.973 { 00:36:49.973 "method": "keyring_file_add_key", 00:36:49.973 "params": { 00:36:49.973 "name": "key0", 00:36:49.973 "path": "/tmp/tmp.70EfR7POqm" 00:36:49.973 } 00:36:49.973 } 00:36:49.973 Got JSON-RPC error response 00:36:49.973 GoRPCClient: error on JSON-RPC call 00:36:49.973 21:39:39 -- common/autotest_common.sh@641 -- # es=1 00:36:49.973 21:39:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:36:49.973 21:39:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:36:49.973 21:39:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:36:49.973 21:39:39 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.70EfR7POqm 00:36:49.973 21:39:39 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.70EfR7POqm 00:36:49.973 21:39:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.70EfR7POqm 00:36:50.231 21:39:39 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.70EfR7POqm 00:36:50.231 21:39:39 -- keyring/file.sh@88 -- # get_refcnt key0 00:36:50.231 21:39:39 -- keyring/common.sh@12 -- # get_key key0 00:36:50.231 21:39:39 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.231 21:39:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.231 21:39:39 -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.231 21:39:39 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.488 21:39:39 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:50.488 21:39:39 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.488 21:39:39 -- common/autotest_common.sh@638 -- # local es=0 00:36:50.488 21:39:39 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.488 21:39:39 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:36:50.488 21:39:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:50.488 21:39:39 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:36:50.488 21:39:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:36:50.488 21:39:39 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 
127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.489 21:39:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:50.746 [2024-04-26 21:39:39.825073] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.70EfR7POqm': No such file or directory 00:36:50.746 [2024-04-26 21:39:39.825118] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:50.746 [2024-04-26 21:39:39.825140] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:50.746 [2024-04-26 21:39:39.825146] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:50.746 [2024-04-26 21:39:39.825152] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:50.746 2024/04/26 21:39:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:36:50.746 request: 00:36:50.746 { 00:36:50.746 "method": "bdev_nvme_attach_controller", 00:36:50.746 "params": { 00:36:50.746 "name": "nvme0", 00:36:50.746 "trtype": "tcp", 00:36:50.746 "traddr": "127.0.0.1", 00:36:50.746 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:50.746 "adrfam": "ipv4", 00:36:50.746 "trsvcid": "4420", 00:36:50.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:50.746 "psk": "key0" 00:36:50.746 } 00:36:50.746 } 00:36:50.746 Got JSON-RPC error response 00:36:50.746 GoRPCClient: error on JSON-RPC call 00:36:50.746 21:39:39 -- common/autotest_common.sh@641 -- # es=1 00:36:50.746 21:39:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:36:50.746 21:39:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:36:50.746 21:39:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:36:50.746 21:39:39 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:50.746 21:39:39 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:51.004 21:39:40 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:51.004 21:39:40 -- keyring/common.sh@15 -- # local name key digest path 00:36:51.004 21:39:40 -- keyring/common.sh@17 -- # name=key0 00:36:51.004 21:39:40 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:51.004 21:39:40 -- keyring/common.sh@17 -- # digest=0 00:36:51.004 21:39:40 -- keyring/common.sh@18 -- # mktemp 00:36:51.004 21:39:40 -- keyring/common.sh@18 -- # path=/tmp/tmp.Ub4Wh6rECb 00:36:51.004 21:39:40 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:51.004 21:39:40 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:51.004 21:39:40 -- nvmf/common.sh@691 -- # local prefix key digest 00:36:51.004 21:39:40 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:36:51.004 21:39:40 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:36:51.004 21:39:40 -- nvmf/common.sh@693 -- # digest=0 00:36:51.004 21:39:40 -- nvmf/common.sh@694 -- # python - 00:36:51.004 
21:39:40 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ub4Wh6rECb 00:36:51.004 21:39:40 -- keyring/common.sh@23 -- # echo /tmp/tmp.Ub4Wh6rECb 00:36:51.004 21:39:40 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Ub4Wh6rECb 00:36:51.004 21:39:40 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ub4Wh6rECb 00:36:51.004 21:39:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ub4Wh6rECb 00:36:51.263 21:39:40 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.263 21:39:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:51.521 nvme0n1 00:36:51.521 21:39:40 -- keyring/file.sh@99 -- # get_refcnt key0 00:36:51.521 21:39:40 -- keyring/common.sh@12 -- # get_key key0 00:36:51.521 21:39:40 -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:51.521 21:39:40 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:51.521 21:39:40 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:51.521 21:39:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:51.780 21:39:40 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:51.780 21:39:40 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:51.780 21:39:40 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:52.038 21:39:41 -- keyring/file.sh@101 -- # get_key key0 00:36:52.038 21:39:41 -- keyring/file.sh@101 -- # jq -r .removed 00:36:52.038 21:39:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.038 21:39:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.038 21:39:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:52.297 21:39:41 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:52.297 21:39:41 -- keyring/file.sh@102 -- # get_refcnt key0 00:36:52.297 21:39:41 -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.297 21:39:41 -- keyring/common.sh@12 -- # get_key key0 00:36:52.297 21:39:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.297 21:39:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:52.297 21:39:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.554 21:39:41 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:52.554 21:39:41 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:52.554 21:39:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:52.554 21:39:41 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:52.554 21:39:41 -- keyring/file.sh@104 -- # jq length 00:36:52.554 21:39:41 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.813 21:39:41 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:52.813 21:39:41 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ub4Wh6rECb 00:36:52.813 21:39:41 -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ub4Wh6rECb 00:36:53.072 21:39:42 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lvTDcFVTVE 00:36:53.072 21:39:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lvTDcFVTVE 00:36:53.331 21:39:42 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:53.331 21:39:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:53.590 nvme0n1 00:36:53.590 21:39:42 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:53.590 21:39:42 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:53.848 21:39:42 -- keyring/file.sh@112 -- # config='{ 00:36:53.848 "subsystems": [ 00:36:53.848 { 00:36:53.848 "subsystem": "keyring", 00:36:53.848 "config": [ 00:36:53.848 { 00:36:53.848 "method": "keyring_file_add_key", 00:36:53.848 "params": { 00:36:53.848 "name": "key0", 00:36:53.848 "path": "/tmp/tmp.Ub4Wh6rECb" 00:36:53.848 } 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "method": "keyring_file_add_key", 00:36:53.848 "params": { 00:36:53.848 "name": "key1", 00:36:53.848 "path": "/tmp/tmp.lvTDcFVTVE" 00:36:53.848 } 00:36:53.848 } 00:36:53.848 ] 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "subsystem": "iobuf", 00:36:53.848 "config": [ 00:36:53.848 { 00:36:53.848 "method": "iobuf_set_options", 00:36:53.848 "params": { 00:36:53.848 "large_bufsize": 135168, 00:36:53.848 "large_pool_count": 1024, 00:36:53.848 "small_bufsize": 8192, 00:36:53.848 "small_pool_count": 8192 00:36:53.848 } 00:36:53.848 } 00:36:53.848 ] 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "subsystem": "sock", 00:36:53.848 "config": [ 00:36:53.848 { 00:36:53.848 "method": "sock_impl_set_options", 00:36:53.848 "params": { 00:36:53.848 "enable_ktls": false, 00:36:53.848 "enable_placement_id": 0, 00:36:53.848 "enable_quickack": false, 00:36:53.848 "enable_recv_pipe": true, 00:36:53.848 "enable_zerocopy_send_client": false, 00:36:53.848 "enable_zerocopy_send_server": true, 00:36:53.848 "impl_name": "posix", 00:36:53.848 "recv_buf_size": 2097152, 00:36:53.848 "send_buf_size": 2097152, 00:36:53.848 "tls_version": 0, 00:36:53.848 "zerocopy_threshold": 0 00:36:53.848 } 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "method": "sock_impl_set_options", 00:36:53.848 "params": { 00:36:53.848 "enable_ktls": false, 00:36:53.848 "enable_placement_id": 0, 00:36:53.848 "enable_quickack": false, 00:36:53.848 "enable_recv_pipe": true, 00:36:53.848 "enable_zerocopy_send_client": false, 00:36:53.848 "enable_zerocopy_send_server": true, 00:36:53.848 "impl_name": "ssl", 00:36:53.848 "recv_buf_size": 4096, 00:36:53.848 "send_buf_size": 4096, 00:36:53.848 "tls_version": 0, 00:36:53.848 "zerocopy_threshold": 0 00:36:53.848 } 00:36:53.848 } 00:36:53.848 ] 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "subsystem": "vmd", 00:36:53.848 "config": [] 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "subsystem": "accel", 00:36:53.848 "config": [ 00:36:53.848 { 00:36:53.848 "method": "accel_set_options", 00:36:53.848 "params": { 00:36:53.848 "buf_count": 2048, 00:36:53.848 "large_cache_size": 16, 00:36:53.848 
"sequence_count": 2048, 00:36:53.848 "small_cache_size": 128, 00:36:53.848 "task_count": 2048 00:36:53.848 } 00:36:53.848 } 00:36:53.848 ] 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "subsystem": "bdev", 00:36:53.848 "config": [ 00:36:53.848 { 00:36:53.848 "method": "bdev_set_options", 00:36:53.848 "params": { 00:36:53.848 "bdev_auto_examine": true, 00:36:53.848 "bdev_io_cache_size": 256, 00:36:53.848 "bdev_io_pool_size": 65535, 00:36:53.848 "iobuf_large_cache_size": 16, 00:36:53.848 "iobuf_small_cache_size": 128 00:36:53.848 } 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "method": "bdev_raid_set_options", 00:36:53.848 "params": { 00:36:53.848 "process_window_size_kb": 1024 00:36:53.848 } 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "method": "bdev_iscsi_set_options", 00:36:53.848 "params": { 00:36:53.848 "timeout_sec": 30 00:36:53.848 } 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "method": "bdev_nvme_set_options", 00:36:53.848 "params": { 00:36:53.848 "action_on_timeout": "none", 00:36:53.848 "allow_accel_sequence": false, 00:36:53.848 "arbitration_burst": 0, 00:36:53.848 "bdev_retry_count": 3, 00:36:53.848 "ctrlr_loss_timeout_sec": 0, 00:36:53.848 "delay_cmd_submit": true, 00:36:53.848 "dhchap_dhgroups": [ 00:36:53.848 "null", 00:36:53.848 "ffdhe2048", 00:36:53.848 "ffdhe3072", 00:36:53.848 "ffdhe4096", 00:36:53.848 "ffdhe6144", 00:36:53.848 "ffdhe8192" 00:36:53.848 ], 00:36:53.848 "dhchap_digests": [ 00:36:53.848 "sha256", 00:36:53.848 "sha384", 00:36:53.848 "sha512" 00:36:53.848 ], 00:36:53.848 "disable_auto_failback": false, 00:36:53.848 "fast_io_fail_timeout_sec": 0, 00:36:53.848 "generate_uuids": false, 00:36:53.848 "high_priority_weight": 0, 00:36:53.848 "io_path_stat": false, 00:36:53.848 "io_queue_requests": 512, 00:36:53.848 "keep_alive_timeout_ms": 10000, 00:36:53.848 "low_priority_weight": 0, 00:36:53.848 "medium_priority_weight": 0, 00:36:53.848 "nvme_adminq_poll_period_us": 10000, 00:36:53.848 "nvme_error_stat": false, 00:36:53.848 "nvme_ioq_poll_period_us": 0, 00:36:53.848 "rdma_cm_event_timeout_ms": 0, 00:36:53.848 "rdma_max_cq_size": 0, 00:36:53.848 "rdma_srq_size": 0, 00:36:53.848 "reconnect_delay_sec": 0, 00:36:53.848 "timeout_admin_us": 0, 00:36:53.848 "timeout_us": 0, 00:36:53.848 "transport_ack_timeout": 0, 00:36:53.848 "transport_retry_count": 4, 00:36:53.848 "transport_tos": 0 00:36:53.848 } 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "method": "bdev_nvme_attach_controller", 00:36:53.848 "params": { 00:36:53.848 "adrfam": "IPv4", 00:36:53.848 "ctrlr_loss_timeout_sec": 0, 00:36:53.848 "ddgst": false, 00:36:53.848 "fast_io_fail_timeout_sec": 0, 00:36:53.848 "hdgst": false, 00:36:53.848 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.848 "name": "nvme0", 00:36:53.848 "prchk_guard": false, 00:36:53.848 "prchk_reftag": false, 00:36:53.848 "psk": "key0", 00:36:53.848 "reconnect_delay_sec": 0, 00:36:53.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.848 "traddr": "127.0.0.1", 00:36:53.848 "trsvcid": "4420", 00:36:53.848 "trtype": "TCP" 00:36:53.848 } 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "method": "bdev_nvme_set_hotplug", 00:36:53.848 "params": { 00:36:53.848 "enable": false, 00:36:53.848 "period_us": 100000 00:36:53.848 } 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "method": "bdev_wait_for_examine" 00:36:53.848 } 00:36:53.848 ] 00:36:53.848 }, 00:36:53.848 { 00:36:53.848 "subsystem": "nbd", 00:36:53.848 "config": [] 00:36:53.848 } 00:36:53.848 ] 00:36:53.848 }' 00:36:53.848 21:39:42 -- keyring/file.sh@114 -- # killprocess 111485 00:36:53.848 21:39:42 -- 
common/autotest_common.sh@936 -- # '[' -z 111485 ']' 00:36:53.848 21:39:42 -- common/autotest_common.sh@940 -- # kill -0 111485 00:36:53.848 21:39:42 -- common/autotest_common.sh@941 -- # uname 00:36:53.848 21:39:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:53.848 21:39:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111485 00:36:53.848 21:39:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:36:53.848 killing process with pid 111485 00:36:53.848 Received shutdown signal, test time was about 1.000000 seconds 00:36:53.848 00:36:53.848 Latency(us) 00:36:53.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:53.848 =================================================================================================================== 00:36:53.848 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:53.848 21:39:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:36:53.848 21:39:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111485' 00:36:53.848 21:39:43 -- common/autotest_common.sh@955 -- # kill 111485 00:36:53.848 21:39:43 -- common/autotest_common.sh@960 -- # wait 111485 00:36:54.107 21:39:43 -- keyring/file.sh@117 -- # bperfpid=111946 00:36:54.107 21:39:43 -- keyring/file.sh@119 -- # waitforlisten 111946 /var/tmp/bperf.sock 00:36:54.107 21:39:43 -- common/autotest_common.sh@817 -- # '[' -z 111946 ']' 00:36:54.107 21:39:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:54.107 21:39:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:36:54.107 21:39:43 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:54.107 21:39:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:54.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:36:54.107 21:39:43 -- keyring/file.sh@115 -- # echo '{ 00:36:54.107 "subsystems": [ 00:36:54.107 { 00:36:54.107 "subsystem": "keyring", 00:36:54.107 "config": [ 00:36:54.107 { 00:36:54.107 "method": "keyring_file_add_key", 00:36:54.107 "params": { 00:36:54.107 "name": "key0", 00:36:54.107 "path": "/tmp/tmp.Ub4Wh6rECb" 00:36:54.107 } 00:36:54.107 }, 00:36:54.107 { 00:36:54.107 "method": "keyring_file_add_key", 00:36:54.107 "params": { 00:36:54.107 "name": "key1", 00:36:54.107 "path": "/tmp/tmp.lvTDcFVTVE" 00:36:54.107 } 00:36:54.107 } 00:36:54.107 ] 00:36:54.107 }, 00:36:54.107 { 00:36:54.107 "subsystem": "iobuf", 00:36:54.107 "config": [ 00:36:54.107 { 00:36:54.107 "method": "iobuf_set_options", 00:36:54.107 "params": { 00:36:54.107 "large_bufsize": 135168, 00:36:54.107 "large_pool_count": 1024, 00:36:54.107 "small_bufsize": 8192, 00:36:54.107 "small_pool_count": 8192 00:36:54.107 } 00:36:54.107 } 00:36:54.107 ] 00:36:54.107 }, 00:36:54.107 { 00:36:54.107 "subsystem": "sock", 00:36:54.107 "config": [ 00:36:54.107 { 00:36:54.107 "method": "sock_impl_set_options", 00:36:54.107 "params": { 00:36:54.107 "enable_ktls": false, 00:36:54.107 "enable_placement_id": 0, 00:36:54.107 "enable_quickack": false, 00:36:54.107 "enable_recv_pipe": true, 00:36:54.107 "enable_zerocopy_send_client": false, 00:36:54.107 "enable_zerocopy_send_server": true, 00:36:54.107 "impl_name": "posix", 00:36:54.107 "recv_buf_size": 2097152, 00:36:54.107 "send_buf_size": 2097152, 00:36:54.107 "tls_version": 0, 00:36:54.107 "zerocopy_threshold": 0 00:36:54.107 } 00:36:54.107 }, 00:36:54.107 { 00:36:54.107 "method": "sock_impl_set_options", 00:36:54.107 "params": { 00:36:54.107 "enable_ktls": false, 00:36:54.107 "enable_placement_id": 0, 00:36:54.107 "enable_quickack": false, 00:36:54.107 "enable_recv_pipe": true, 00:36:54.107 "enable_zerocopy_send_client": false, 00:36:54.107 "enable_zerocopy_send_server": true, 00:36:54.107 "impl_name": "ssl", 00:36:54.107 "recv_buf_size": 4096, 00:36:54.107 "send_buf_size": 4096, 00:36:54.107 "tls_version": 0, 00:36:54.107 "zerocopy_threshold": 0 00:36:54.107 } 00:36:54.107 } 00:36:54.107 ] 00:36:54.107 }, 00:36:54.107 { 00:36:54.107 "subsystem": "vmd", 00:36:54.107 "config": [] 00:36:54.107 }, 00:36:54.107 { 00:36:54.107 "subsystem": "accel", 00:36:54.107 "config": [ 00:36:54.107 { 00:36:54.107 "method": "accel_set_options", 00:36:54.107 "params": { 00:36:54.107 "buf_count": 2048, 00:36:54.107 "large_cache_size": 16, 00:36:54.107 "sequence_count": 2048, 00:36:54.107 "small_cache_size": 128, 00:36:54.107 "task_count": 2048 00:36:54.107 } 00:36:54.107 } 00:36:54.107 ] 00:36:54.107 }, 00:36:54.107 { 00:36:54.107 "subsystem": "bdev", 00:36:54.107 "config": [ 00:36:54.107 { 00:36:54.107 "method": "bdev_set_options", 00:36:54.107 "params": { 00:36:54.107 "bdev_auto_examine": true, 00:36:54.107 "bdev_io_cache_size": 256, 00:36:54.107 "bdev_io_pool_size": 65535, 00:36:54.107 "iobuf_large_cache_size": 16, 00:36:54.107 "iobuf_small_cache_size": 128 00:36:54.107 } 00:36:54.107 }, 00:36:54.107 { 00:36:54.107 "method": "bdev_raid_set_options", 00:36:54.107 "params": { 00:36:54.107 "process_window_size_kb": 1024 00:36:54.107 } 00:36:54.107 }, 00:36:54.107 { 00:36:54.107 "method": "bdev_iscsi_set_options", 00:36:54.107 "params": { 00:36:54.107 "timeout_sec": 30 00:36:54.107 } 00:36:54.107 }, 00:36:54.107 { 00:36:54.107 "method": "bdev_nvme_set_options", 00:36:54.107 "params": { 00:36:54.107 "action_on_timeout": "none", 00:36:54.107 "allow_accel_sequence": false, 00:36:54.107 "arbitration_burst": 0, 
00:36:54.107 "bdev_retry_count": 3, 00:36:54.107 "ctrlr_loss_timeout_sec": 0, 00:36:54.107 "delay_cmd_submit": true, 00:36:54.107 "dhchap_dhgroups": [ 00:36:54.107 "null", 00:36:54.107 "ffdhe2048", 00:36:54.107 "ffdhe3072", 00:36:54.107 "ffdhe4096", 00:36:54.107 "ffdhe6144", 00:36:54.107 "ffdhe8192" 00:36:54.107 ], 00:36:54.108 "dhchap_digests": [ 00:36:54.108 "sha256", 00:36:54.108 "sha384", 00:36:54.108 "sha512" 00:36:54.108 ], 00:36:54.108 "disable_auto_failback": false, 00:36:54.108 "fast_io_fail_timeout_sec": 0, 00:36:54.108 "generate_uuids": false, 00:36:54.108 "high_priority_weight": 0, 00:36:54.108 "io_path_stat": false, 00:36:54.108 "io_queue_requests": 512, 00:36:54.108 "keep_alive_timeout_ms": 10000, 00:36:54.108 "low_priority_weight": 0, 00:36:54.108 "medium_priority_weight": 0, 00:36:54.108 "nvme_adminq_poll_period_us": 10000, 00:36:54.108 "nvme_error_stat": false, 00:36:54.108 "nvme_ioq_poll_period_us": 0, 00:36:54.108 "rdma_cm_event_timeout_ms": 0, 00:36:54.108 "rdma_max_cq_size": 0, 00:36:54.108 "rdma_srq_size": 0, 00:36:54.108 "reconnect_delay_sec": 0, 00:36:54.108 "timeout_admin_us": 0, 00:36:54.108 "timeout_us": 0, 00:36:54.108 "transport_ack_timeout": 0, 00:36:54.108 "transport_retry_count": 4, 00:36:54.108 "transport_tos": 0 00:36:54.108 } 00:36:54.108 }, 00:36:54.108 { 00:36:54.108 "method": "bdev_nvme_attach_controller", 00:36:54.108 "params": { 00:36:54.108 "adrfam": "IPv4", 00:36:54.108 "ctrlr_loss_timeout_sec": 0, 00:36:54.108 "ddgst": false, 00:36:54.108 "fast_io_fail_timeout_sec": 0, 00:36:54.108 "hdgst": false, 00:36:54.108 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:54.108 "name": "nvme0", 00:36:54.108 "prchk_guard": false, 00:36:54.108 "prchk_reftag": false, 00:36:54.108 "psk": "key0", 00:36:54.108 "reconnect_delay_sec": 0, 00:36:54.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:54.108 "traddr": "127.0.0.1", 00:36:54.108 "trsvcid": "4420", 00:36:54.108 "trtype": "TCP" 00:36:54.108 } 00:36:54.108 }, 00:36:54.108 { 00:36:54.108 "method": "bdev_nvme_set_hotplug", 00:36:54.108 "params": { 00:36:54.108 "enable": false, 00:36:54.108 "period_us": 100000 00:36:54.108 } 00:36:54.108 }, 00:36:54.108 { 00:36:54.108 "method": "bdev_wait_for_examine" 00:36:54.108 } 00:36:54.108 ] 00:36:54.108 }, 00:36:54.108 { 00:36:54.108 "subsystem": "nbd", 00:36:54.108 "config": [] 00:36:54.108 } 00:36:54.108 ] 00:36:54.108 }' 00:36:54.108 21:39:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:36:54.108 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:36:54.108 [2024-04-26 21:39:43.255174] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
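The JSON blob echoed above is the save_config output captured from the first bdevperf instance; the /dev/fd/63 path in the restart command is consistent with feeding that output back through bash process substitution, roughly:

    # Capture the running configuration (keyring entries + nvme0 controller) before
    # killing the first bdevperf, then hand it to a fresh instance at startup.
    config=$(bperf_cmd save_config)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config") &
    bperfpid=$!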
00:36:54.108 [2024-04-26 21:39:43.255299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111946 ] 00:36:54.367 [2024-04-26 21:39:43.393873] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.367 [2024-04-26 21:39:43.447147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:54.367 [2024-04-26 21:39:43.594022] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:54.936 21:39:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:36:54.936 21:39:44 -- common/autotest_common.sh@850 -- # return 0 00:36:54.936 21:39:44 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:54.936 21:39:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.936 21:39:44 -- keyring/file.sh@120 -- # jq length 00:36:55.196 21:39:44 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:55.196 21:39:44 -- keyring/file.sh@121 -- # get_refcnt key0 00:36:55.196 21:39:44 -- keyring/common.sh@12 -- # get_key key0 00:36:55.196 21:39:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.196 21:39:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.196 21:39:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.196 21:39:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:55.455 21:39:44 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:55.455 21:39:44 -- keyring/file.sh@122 -- # get_refcnt key1 00:36:55.455 21:39:44 -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.455 21:39:44 -- keyring/common.sh@12 -- # get_key key1 00:36:55.455 21:39:44 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.455 21:39:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.455 21:39:44 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:55.714 21:39:44 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:55.714 21:39:44 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:55.714 21:39:44 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:55.714 21:39:44 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:55.974 21:39:44 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:55.974 21:39:44 -- keyring/file.sh@1 -- # cleanup 00:36:55.974 21:39:44 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Ub4Wh6rECb /tmp/tmp.lvTDcFVTVE 00:36:55.974 21:39:44 -- keyring/file.sh@20 -- # killprocess 111946 00:36:55.974 21:39:44 -- common/autotest_common.sh@936 -- # '[' -z 111946 ']' 00:36:55.974 21:39:44 -- common/autotest_common.sh@940 -- # kill -0 111946 00:36:55.974 21:39:44 -- common/autotest_common.sh@941 -- # uname 00:36:55.974 21:39:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:55.974 21:39:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111946 00:36:55.974 21:39:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:36:55.974 21:39:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:36:55.974 killing process with pid 111946 00:36:55.974 21:39:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111946' 00:36:55.974 21:39:45 -- 
common/autotest_common.sh@955 -- # kill 111946 00:36:55.974 Received shutdown signal, test time was about 1.000000 seconds 00:36:55.974 00:36:55.974 Latency(us) 00:36:55.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.974 =================================================================================================================== 00:36:55.974 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:55.974 21:39:45 -- common/autotest_common.sh@960 -- # wait 111946 00:36:55.974 21:39:45 -- keyring/file.sh@21 -- # killprocess 111453 00:36:55.974 21:39:45 -- common/autotest_common.sh@936 -- # '[' -z 111453 ']' 00:36:55.974 21:39:45 -- common/autotest_common.sh@940 -- # kill -0 111453 00:36:55.974 21:39:45 -- common/autotest_common.sh@941 -- # uname 00:36:55.974 21:39:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:36:55.974 21:39:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 111453 00:36:56.233 21:39:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:36:56.233 21:39:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:36:56.233 killing process with pid 111453 00:36:56.233 21:39:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 111453' 00:36:56.233 21:39:45 -- common/autotest_common.sh@955 -- # kill 111453 00:36:56.233 [2024-04-26 21:39:45.240512] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:56.234 21:39:45 -- common/autotest_common.sh@960 -- # wait 111453 00:36:56.494 00:36:56.494 real 0m13.950s 00:36:56.494 user 0m34.069s 00:36:56.494 sys 0m3.080s 00:36:56.494 21:39:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:36:56.494 21:39:45 -- common/autotest_common.sh@10 -- # set +x 00:36:56.494 ************************************ 00:36:56.494 END TEST keyring_file 00:36:56.494 ************************************ 00:36:56.494 21:39:45 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:36:56.494 21:39:45 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:36:56.494 21:39:45 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:36:56.494 21:39:45 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:36:56.494 21:39:45 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:56.494 21:39:45 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:36:56.494 21:39:45 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:56.494 21:39:45 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:36:56.494 21:39:45 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:36:56.494 21:39:45 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:36:56.494 21:39:45 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:56.494 21:39:45 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:36:56.494 21:39:45 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:36:56.494 21:39:45 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:36:56.494 21:39:45 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:36:56.494 21:39:45 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:36:56.494 21:39:45 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:36:56.494 21:39:45 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:36:56.494 21:39:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:36:56.494 21:39:45 -- common/autotest_common.sh@10 -- # set +x 00:36:56.494 21:39:45 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:36:56.494 21:39:45 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:36:56.494 21:39:45 -- common/autotest_common.sh@1379 -- # 
xtrace_disable 00:36:56.494 21:39:45 -- common/autotest_common.sh@10 -- # set +x 00:36:58.404 INFO: APP EXITING 00:36:58.404 INFO: killing all VMs 00:36:58.663 INFO: killing vhost app 00:36:58.663 INFO: EXIT DONE 00:36:59.232 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:59.492 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:59.492 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:37:00.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:00.430 Cleaning 00:37:00.430 Removing: /var/run/dpdk/spdk0/config 00:37:00.430 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:00.430 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:00.430 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:00.430 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:00.430 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:00.430 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:00.430 Removing: /var/run/dpdk/spdk1/config 00:37:00.430 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:00.430 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:00.430 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:00.430 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:00.430 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:00.430 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:00.430 Removing: /var/run/dpdk/spdk2/config 00:37:00.430 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:00.430 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:00.430 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:00.430 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:00.430 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:00.430 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:00.430 Removing: /var/run/dpdk/spdk3/config 00:37:00.430 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:00.430 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:00.430 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:00.430 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:00.430 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:00.430 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:00.430 Removing: /var/run/dpdk/spdk4/config 00:37:00.430 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:00.430 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:00.430 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:00.430 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:00.430 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:00.430 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:00.430 Removing: /dev/shm/nvmf_trace.0 00:37:00.430 Removing: /dev/shm/spdk_tgt_trace.pid73146 00:37:00.430 Removing: /var/run/dpdk/spdk0 00:37:00.430 Removing: /var/run/dpdk/spdk1 00:37:00.430 Removing: /var/run/dpdk/spdk2 00:37:00.430 Removing: /var/run/dpdk/spdk3 00:37:00.430 Removing: /var/run/dpdk/spdk4 00:37:00.431 Removing: /var/run/dpdk/spdk_pid100007 00:37:00.431 Removing: /var/run/dpdk/spdk_pid100050 00:37:00.431 Removing: /var/run/dpdk/spdk_pid100134 00:37:00.431 Removing: /var/run/dpdk/spdk_pid100183 00:37:00.431 Removing: /var/run/dpdk/spdk_pid100541 00:37:00.431 Removing: /var/run/dpdk/spdk_pid100792 00:37:00.431 Removing: /var/run/dpdk/spdk_pid101292 00:37:00.431 Removing: /var/run/dpdk/spdk_pid101832 
00:37:00.431 Removing: /var/run/dpdk/spdk_pid102428 00:37:00.431 Removing: /var/run/dpdk/spdk_pid102430 00:37:00.431 Removing: /var/run/dpdk/spdk_pid104401 00:37:00.431 Removing: /var/run/dpdk/spdk_pid104492 00:37:00.431 Removing: /var/run/dpdk/spdk_pid104582 00:37:00.431 Removing: /var/run/dpdk/spdk_pid104653 00:37:00.431 Removing: /var/run/dpdk/spdk_pid104819 00:37:00.431 Removing: /var/run/dpdk/spdk_pid104904 00:37:00.431 Removing: /var/run/dpdk/spdk_pid104989 00:37:00.431 Removing: /var/run/dpdk/spdk_pid105080 00:37:00.431 Removing: /var/run/dpdk/spdk_pid105426 00:37:00.431 Removing: /var/run/dpdk/spdk_pid106126 00:37:00.431 Removing: /var/run/dpdk/spdk_pid107476 00:37:00.431 Removing: /var/run/dpdk/spdk_pid107676 00:37:00.431 Removing: /var/run/dpdk/spdk_pid107964 00:37:00.690 Removing: /var/run/dpdk/spdk_pid108268 00:37:00.691 Removing: /var/run/dpdk/spdk_pid108827 00:37:00.691 Removing: /var/run/dpdk/spdk_pid108832 00:37:00.691 Removing: /var/run/dpdk/spdk_pid109204 00:37:00.691 Removing: /var/run/dpdk/spdk_pid109364 00:37:00.691 Removing: /var/run/dpdk/spdk_pid109529 00:37:00.691 Removing: /var/run/dpdk/spdk_pid109623 00:37:00.691 Removing: /var/run/dpdk/spdk_pid109778 00:37:00.691 Removing: /var/run/dpdk/spdk_pid109891 00:37:00.691 Removing: /var/run/dpdk/spdk_pid110582 00:37:00.691 Removing: /var/run/dpdk/spdk_pid110617 00:37:00.691 Removing: /var/run/dpdk/spdk_pid110652 00:37:00.691 Removing: /var/run/dpdk/spdk_pid110918 00:37:00.691 Removing: /var/run/dpdk/spdk_pid110952 00:37:00.691 Removing: /var/run/dpdk/spdk_pid110986 00:37:00.691 Removing: /var/run/dpdk/spdk_pid111453 00:37:00.691 Removing: /var/run/dpdk/spdk_pid111485 00:37:00.691 Removing: /var/run/dpdk/spdk_pid111946 00:37:00.691 Removing: /var/run/dpdk/spdk_pid72983 00:37:00.691 Removing: /var/run/dpdk/spdk_pid73146 00:37:00.691 Removing: /var/run/dpdk/spdk_pid73444 00:37:00.691 Removing: /var/run/dpdk/spdk_pid73540 00:37:00.691 Removing: /var/run/dpdk/spdk_pid73566 00:37:00.691 Removing: /var/run/dpdk/spdk_pid73684 00:37:00.691 Removing: /var/run/dpdk/spdk_pid73714 00:37:00.691 Removing: /var/run/dpdk/spdk_pid73843 00:37:00.691 Removing: /var/run/dpdk/spdk_pid74111 00:37:00.691 Removing: /var/run/dpdk/spdk_pid74292 00:37:00.691 Removing: /var/run/dpdk/spdk_pid74380 00:37:00.691 Removing: /var/run/dpdk/spdk_pid74478 00:37:00.691 Removing: /var/run/dpdk/spdk_pid74583 00:37:00.691 Removing: /var/run/dpdk/spdk_pid74621 00:37:00.691 Removing: /var/run/dpdk/spdk_pid74666 00:37:00.691 Removing: /var/run/dpdk/spdk_pid74727 00:37:00.691 Removing: /var/run/dpdk/spdk_pid74854 00:37:00.691 Removing: /var/run/dpdk/spdk_pid75468 00:37:00.691 Removing: /var/run/dpdk/spdk_pid75536 00:37:00.691 Removing: /var/run/dpdk/spdk_pid75610 00:37:00.691 Removing: /var/run/dpdk/spdk_pid75637 00:37:00.691 Removing: /var/run/dpdk/spdk_pid75711 00:37:00.691 Removing: /var/run/dpdk/spdk_pid75739 00:37:00.691 Removing: /var/run/dpdk/spdk_pid75818 00:37:00.691 Removing: /var/run/dpdk/spdk_pid75846 00:37:00.691 Removing: /var/run/dpdk/spdk_pid75901 00:37:00.691 Removing: /var/run/dpdk/spdk_pid75931 00:37:00.691 Removing: /var/run/dpdk/spdk_pid75981 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76011 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76168 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76207 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76288 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76367 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76396 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76473 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76506 
00:37:00.691 Removing: /var/run/dpdk/spdk_pid76550 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76583 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76627 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76661 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76705 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76739 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76778 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76817 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76856 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76894 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76933 00:37:00.691 Removing: /var/run/dpdk/spdk_pid76971 00:37:00.691 Removing: /var/run/dpdk/spdk_pid77010 00:37:00.691 Removing: /var/run/dpdk/spdk_pid77048 00:37:00.691 Removing: /var/run/dpdk/spdk_pid77087 00:37:00.957 Removing: /var/run/dpdk/spdk_pid77129 00:37:00.957 Removing: /var/run/dpdk/spdk_pid77172 00:37:00.957 Removing: /var/run/dpdk/spdk_pid77205 00:37:00.957 Removing: /var/run/dpdk/spdk_pid77252 00:37:00.957 Removing: /var/run/dpdk/spdk_pid77327 00:37:00.957 Removing: /var/run/dpdk/spdk_pid77447 00:37:00.957 Removing: /var/run/dpdk/spdk_pid77876 00:37:00.957 Removing: /var/run/dpdk/spdk_pid84643 00:37:00.957 Removing: /var/run/dpdk/spdk_pid85003 00:37:00.957 Removing: /var/run/dpdk/spdk_pid86215 00:37:00.957 Removing: /var/run/dpdk/spdk_pid86598 00:37:00.957 Removing: /var/run/dpdk/spdk_pid86835 00:37:00.957 Removing: /var/run/dpdk/spdk_pid86879 00:37:00.957 Removing: /var/run/dpdk/spdk_pid87764 00:37:00.957 Removing: /var/run/dpdk/spdk_pid87813 00:37:00.957 Removing: /var/run/dpdk/spdk_pid88192 00:37:00.957 Removing: /var/run/dpdk/spdk_pid88717 00:37:00.957 Removing: /var/run/dpdk/spdk_pid89151 00:37:00.957 Removing: /var/run/dpdk/spdk_pid90114 00:37:00.957 Removing: /var/run/dpdk/spdk_pid91099 00:37:00.957 Removing: /var/run/dpdk/spdk_pid91216 00:37:00.957 Removing: /var/run/dpdk/spdk_pid91278 00:37:00.957 Removing: /var/run/dpdk/spdk_pid92752 00:37:00.957 Removing: /var/run/dpdk/spdk_pid92993 00:37:00.957 Removing: /var/run/dpdk/spdk_pid93435 00:37:00.957 Removing: /var/run/dpdk/spdk_pid93545 00:37:00.957 Removing: /var/run/dpdk/spdk_pid93691 00:37:00.957 Removing: /var/run/dpdk/spdk_pid93738 00:37:00.957 Removing: /var/run/dpdk/spdk_pid93770 00:37:00.957 Removing: /var/run/dpdk/spdk_pid93810 00:37:00.957 Removing: /var/run/dpdk/spdk_pid93968 00:37:00.957 Removing: /var/run/dpdk/spdk_pid94115 00:37:00.957 Removing: /var/run/dpdk/spdk_pid94379 00:37:00.957 Removing: /var/run/dpdk/spdk_pid94497 00:37:00.957 Removing: /var/run/dpdk/spdk_pid94746 00:37:00.957 Removing: /var/run/dpdk/spdk_pid94866 00:37:00.957 Removing: /var/run/dpdk/spdk_pid94995 00:37:00.957 Removing: /var/run/dpdk/spdk_pid95339 00:37:00.957 Removing: /var/run/dpdk/spdk_pid95722 00:37:00.957 Removing: /var/run/dpdk/spdk_pid95725 00:37:00.957 Removing: /var/run/dpdk/spdk_pid97978 00:37:00.957 Removing: /var/run/dpdk/spdk_pid98290 00:37:00.957 Removing: /var/run/dpdk/spdk_pid98799 00:37:00.957 Removing: /var/run/dpdk/spdk_pid98801 00:37:00.957 Removing: /var/run/dpdk/spdk_pid99147 00:37:00.957 Removing: /var/run/dpdk/spdk_pid99161 00:37:00.957 Removing: /var/run/dpdk/spdk_pid99181 00:37:00.957 Removing: /var/run/dpdk/spdk_pid99206 00:37:00.957 Removing: /var/run/dpdk/spdk_pid99211 00:37:00.957 Removing: /var/run/dpdk/spdk_pid99358 00:37:00.957 Removing: /var/run/dpdk/spdk_pid99367 00:37:00.957 Removing: /var/run/dpdk/spdk_pid99474 00:37:00.957 Removing: /var/run/dpdk/spdk_pid99477 00:37:00.957 Removing: /var/run/dpdk/spdk_pid99580 00:37:00.957 Removing: 
/var/run/dpdk/spdk_pid99587 00:37:00.957 Clean 00:37:01.245 21:39:50 -- common/autotest_common.sh@1437 -- # return 0 00:37:01.245 21:39:50 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:37:01.245 21:39:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:37:01.245 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:37:01.245 21:39:50 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:37:01.245 21:39:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:37:01.245 21:39:50 -- common/autotest_common.sh@10 -- # set +x 00:37:01.245 21:39:50 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:01.245 21:39:50 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:37:01.245 21:39:50 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:37:01.245 21:39:50 -- spdk/autotest.sh@389 -- # hash lcov 00:37:01.245 21:39:50 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:01.245 21:39:50 -- spdk/autotest.sh@391 -- # hostname 00:37:01.245 21:39:50 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:37:01.504 geninfo: WARNING: invalid characters removed from testname! 00:37:28.054 21:40:14 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:28.991 21:40:17 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:30.899 21:40:20 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:33.438 21:40:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:35.342 21:40:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:37.922 21:40:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:40.456 21:40:29 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:40.456 21:40:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:40.456 21:40:29 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:40.456 21:40:29 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:40.456 21:40:29 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:40.456 21:40:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.456 21:40:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.456 21:40:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.456 21:40:29 -- paths/export.sh@5 -- $ export PATH 00:37:40.456 21:40:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.456 21:40:29 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:37:40.456 21:40:29 -- common/autobuild_common.sh@435 -- $ date +%s 00:37:40.456 21:40:29 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714167629.XXXXXX 00:37:40.456 21:40:29 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714167629.2tFSK1 00:37:40.456 21:40:29 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:37:40.456 21:40:29 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:37:40.457 21:40:29 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:37:40.457 21:40:29 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:37:40.457 21:40:29 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:37:40.457 21:40:29 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude 
/home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:37:40.457 21:40:29 -- common/autobuild_common.sh@451 -- $ get_config_params 00:37:40.457 21:40:29 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:37:40.457 21:40:29 -- common/autotest_common.sh@10 -- $ set +x 00:37:40.457 21:40:29 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:37:40.457 21:40:29 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:37:40.457 21:40:29 -- pm/common@17 -- $ local monitor 00:37:40.457 21:40:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:40.457 21:40:29 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=113612 00:37:40.457 21:40:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:40.457 21:40:29 -- pm/common@21 -- $ date +%s 00:37:40.457 21:40:29 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=113614 00:37:40.457 21:40:29 -- pm/common@26 -- $ sleep 1 00:37:40.457 21:40:29 -- pm/common@21 -- $ date +%s 00:37:40.457 21:40:29 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714167629 00:37:40.457 21:40:29 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714167629 00:37:40.457 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714167629_collect-vmstat.pm.log 00:37:40.457 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714167629_collect-cpu-load.pm.log 00:37:41.395 21:40:30 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:37:41.395 21:40:30 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:37:41.395 21:40:30 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:37:41.395 21:40:30 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:41.395 21:40:30 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:41.395 21:40:30 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:41.395 21:40:30 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:41.395 21:40:30 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:41.395 21:40:30 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:41.395 21:40:30 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:41.395 21:40:30 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:41.395 21:40:30 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:41.395 21:40:30 -- pm/common@30 -- $ signal_monitor_resources TERM 00:37:41.395 21:40:30 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:37:41.395 21:40:30 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:41.395 21:40:30 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:37:41.395 21:40:30 -- pm/common@45 -- $ pid=113621 00:37:41.395 21:40:30 -- pm/common@52 -- $ sudo kill -TERM 113621 00:37:41.395 21:40:30 -- pm/common@43 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:37:41.395 21:40:30 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:37:41.395 21:40:30 -- pm/common@45 -- $ pid=113620 00:37:41.395 21:40:30 -- pm/common@52 -- $ sudo kill -TERM 113620 00:37:41.395 + [[ -n 6051 ]] 00:37:41.395 + sudo kill 6051 00:37:41.405 [Pipeline] } 00:37:41.424 [Pipeline] // timeout 00:37:41.429 [Pipeline] } 00:37:41.447 [Pipeline] // stage 00:37:41.453 [Pipeline] } 00:37:41.471 [Pipeline] // catchError 00:37:41.481 [Pipeline] stage 00:37:41.483 [Pipeline] { (Stop VM) 00:37:41.499 [Pipeline] sh 00:37:41.792 + vagrant halt 00:37:45.080 ==> default: Halting domain... 00:37:53.209 [Pipeline] sh 00:37:53.486 + vagrant destroy -f 00:37:56.769 ==> default: Removing domain... 00:37:56.780 [Pipeline] sh 00:37:57.060 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:37:57.069 [Pipeline] } 00:37:57.088 [Pipeline] // stage 00:37:57.093 [Pipeline] } 00:37:57.108 [Pipeline] // dir 00:37:57.113 [Pipeline] } 00:37:57.126 [Pipeline] // wrap 00:37:57.131 [Pipeline] } 00:37:57.146 [Pipeline] // catchError 00:37:57.155 [Pipeline] stage 00:37:57.156 [Pipeline] { (Epilogue) 00:37:57.169 [Pipeline] sh 00:37:57.446 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:04.025 [Pipeline] catchError 00:38:04.027 [Pipeline] { 00:38:04.039 [Pipeline] sh 00:38:04.321 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:04.321 Artifacts sizes are good 00:38:04.331 [Pipeline] } 00:38:04.347 [Pipeline] // catchError 00:38:04.360 [Pipeline] archiveArtifacts 00:38:04.366 Archiving artifacts 00:38:04.552 [Pipeline] cleanWs 00:38:04.564 [WS-CLEANUP] Deleting project workspace... 00:38:04.564 [WS-CLEANUP] Deferred wipeout is used... 00:38:04.572 [WS-CLEANUP] done 00:38:04.574 [Pipeline] } 00:38:04.593 [Pipeline] // stage 00:38:04.599 [Pipeline] } 00:38:04.620 [Pipeline] // node 00:38:04.626 [Pipeline] End of Pipeline 00:38:04.658 Finished: SUCCESS